2025-09-27 20:46:11.434625 | Job console starting
2025-09-27 20:46:11.449387 | Updating git repos
2025-09-27 20:46:11.546105 | Cloning repos into workspace
2025-09-27 20:46:11.762391 | Restoring repo states
2025-09-27 20:46:11.786155 | Merging changes
2025-09-27 20:46:11.786177 | Checking out repos
2025-09-27 20:46:12.165040 | Preparing playbooks
2025-09-27 20:46:12.772723 | Running Ansible setup
2025-09-27 20:46:16.894639 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-27 20:46:17.645338 |
2025-09-27 20:46:17.645483 | PLAY [Base pre]
2025-09-27 20:46:17.661903 |
2025-09-27 20:46:17.662032 | TASK [Setup log path fact]
2025-09-27 20:46:17.691942 | orchestrator | ok
2025-09-27 20:46:17.708973 |
2025-09-27 20:46:17.709101 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-27 20:46:17.738674 | orchestrator | ok
2025-09-27 20:46:17.750083 |
2025-09-27 20:46:17.750185 | TASK [emit-job-header : Print job information]
2025-09-27 20:46:17.790869 | # Job Information
2025-09-27 20:46:17.791050 | Ansible Version: 2.16.14
2025-09-27 20:46:17.791089 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-27 20:46:17.791128 | Pipeline: post
2025-09-27 20:46:17.791156 | Executor: 521e9411259a
2025-09-27 20:46:17.791181 | Triggered by: https://github.com/osism/testbed/commit/4f33a263a99f154df71b80dc139f414dbd711171
2025-09-27 20:46:17.791207 | Event ID: fb94e8ca-9be2-11f0-8f95-fd1d1764df03
2025-09-27 20:46:17.798060 |
2025-09-27 20:46:17.798166 | LOOP [emit-job-header : Print node information]
2025-09-27 20:46:17.917206 | orchestrator | ok:
2025-09-27 20:46:17.917632 | orchestrator | # Node Information
2025-09-27 20:46:17.917725 | orchestrator | Inventory Hostname: orchestrator
2025-09-27 20:46:17.917782 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-27 20:46:17.917833 | orchestrator | Username: zuul-testbed02
2025-09-27 20:46:17.917881 | orchestrator | Distro: Debian 12.12
2025-09-27 20:46:17.918008 | orchestrator | Provider: static-testbed
2025-09-27 20:46:17.918058 | orchestrator | Region:
2025-09-27 20:46:17.918097 | orchestrator | Label: testbed-orchestrator
2025-09-27 20:46:17.918134 | orchestrator | Product Name: OpenStack Nova
2025-09-27 20:46:17.918169 | orchestrator | Interface IP: 81.163.193.140
2025-09-27 20:46:17.942183 |
2025-09-27 20:46:17.942331 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-27 20:46:18.391561 | orchestrator -> localhost | changed
2025-09-27 20:46:18.401890 |
2025-09-27 20:46:18.402019 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-27 20:46:19.415918 | orchestrator -> localhost | changed
2025-09-27 20:46:19.431151 |
2025-09-27 20:46:19.431293 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-27 20:46:19.698391 | orchestrator -> localhost | ok
2025-09-27 20:46:19.705783 |
2025-09-27 20:46:19.705916 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-27 20:46:19.735368 | orchestrator | ok
2025-09-27 20:46:19.752230 | orchestrator | included: /var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-27 20:46:19.760617 |
2025-09-27 20:46:19.760768 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-27 20:46:20.666586 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-27 20:46:20.667015 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/work/cfb964a163214dfcab0d7f04ee6fb101_id_rsa
2025-09-27 20:46:20.667099 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/work/cfb964a163214dfcab0d7f04ee6fb101_id_rsa.pub
2025-09-27 20:46:20.667152 | orchestrator -> localhost | The key fingerprint is:
2025-09-27 20:46:20.667199 | orchestrator -> localhost | SHA256:/h2/QtSpnzv23wjb4fma1/yKeeD+ZT665rQxs9butc0 zuul-build-sshkey
2025-09-27 20:46:20.667244 | orchestrator -> localhost | The key's randomart image is:
2025-09-27 20:46:20.667318 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-27 20:46:20.667364 | orchestrator -> localhost | | |
2025-09-27 20:46:20.667407 | orchestrator -> localhost | | |
2025-09-27 20:46:20.667448 | orchestrator -> localhost | | . . |
2025-09-27 20:46:20.667485 | orchestrator -> localhost | | . o |
2025-09-27 20:46:20.667523 | orchestrator -> localhost | | S . . |
2025-09-27 20:46:20.667574 | orchestrator -> localhost | | . + |
2025-09-27 20:46:20.667613 | orchestrator -> localhost | | . ooBoo=|
2025-09-27 20:46:20.667650 | orchestrator -> localhost | | . .+%%%O|
2025-09-27 20:46:20.667689 | orchestrator -> localhost | | ..X@^&E|
2025-09-27 20:46:20.667727 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-27 20:46:20.667830 | orchestrator -> localhost | ok: Runtime: 0:00:00.433477
2025-09-27 20:46:20.681421 |
2025-09-27 20:46:20.681558 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-27 20:46:20.718443 | orchestrator | ok
2025-09-27 20:46:20.729098 | orchestrator | included: /var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-27 20:46:20.738514 |
2025-09-27 20:46:20.738610 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-27 20:46:20.761850 | orchestrator | skipping: Conditional result was False
2025-09-27 20:46:20.769614 |
2025-09-27 20:46:20.769720 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-27 20:46:21.338211 | orchestrator | changed
2025-09-27 20:46:21.347508 |
2025-09-27 20:46:21.347628 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-27 20:46:21.627968 | orchestrator | ok
2025-09-27 20:46:21.635890 |
2025-09-27 20:46:21.636003 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-27 20:46:22.066889 | orchestrator | ok
2025-09-27 20:46:22.075266 |
2025-09-27 20:46:22.075427 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-27 20:46:22.486400 | orchestrator | ok
2025-09-27 20:46:22.495005 |
2025-09-27 20:46:22.495135 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-27 20:46:22.519308 | orchestrator | skipping: Conditional result was False
2025-09-27 20:46:22.535601 |
2025-09-27 20:46:22.535748 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-27 20:46:22.988487 | orchestrator -> localhost | changed
2025-09-27 20:46:23.012592 |
2025-09-27 20:46:23.012730 | TASK [add-build-sshkey : Add back temp key]
2025-09-27 20:46:23.329035 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/work/cfb964a163214dfcab0d7f04ee6fb101_id_rsa (zuul-build-sshkey)
2025-09-27 20:46:23.329308 | orchestrator -> localhost | ok: Runtime: 0:00:00.016912
2025-09-27 20:46:23.336624 |
2025-09-27 20:46:23.336724 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-27 20:46:23.732258 | orchestrator | ok
2025-09-27 20:46:23.738502 |
2025-09-27 20:46:23.738621 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-27 20:46:23.762482 | orchestrator | skipping: Conditional result was False
2025-09-27 20:46:23.810602 |
2025-09-27 20:46:23.810734 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-27 20:46:24.229210 | orchestrator | ok
2025-09-27 20:46:24.256964 |
2025-09-27 20:46:24.257134 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-27 20:46:24.304959 | orchestrator | ok
2025-09-27 20:46:24.316059 |
2025-09-27 20:46:24.316187 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-27 20:46:24.623967 | orchestrator -> localhost | ok
2025-09-27 20:46:24.639757 |
2025-09-27 20:46:24.639902 | TASK [validate-host : Collect information about the host]
2025-09-27 20:46:25.864992 | orchestrator | ok
2025-09-27 20:46:25.889192 |
2025-09-27 20:46:25.889368 | TASK [validate-host : Sanitize hostname]
2025-09-27 20:46:25.967141 | orchestrator | ok
2025-09-27 20:46:25.976685 |
2025-09-27 20:46:25.976833 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-27 20:46:26.579481 | orchestrator -> localhost | changed
2025-09-27 20:46:26.586155 |
2025-09-27 20:46:26.586265 | TASK [validate-host : Collect information about zuul worker]
2025-09-27 20:46:27.024834 | orchestrator | ok
2025-09-27 20:46:27.038650 |
2025-09-27 20:46:27.038950 | TASK [validate-host : Write out all zuul information for each host]
2025-09-27 20:46:27.616343 | orchestrator -> localhost | changed
2025-09-27 20:46:27.636104 |
2025-09-27 20:46:27.636246 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-27 20:46:27.912043 | orchestrator | ok
2025-09-27 20:46:27.920961 |
2025-09-27 20:46:27.921237 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-27 20:47:06.389000 | orchestrator | changed:
2025-09-27 20:47:06.389207 | orchestrator | .d..t...... src/
2025-09-27 20:47:06.389242 | orchestrator | .d..t...... src/github.com/
2025-09-27 20:47:06.389268 | orchestrator | .d..t...... src/github.com/osism/
2025-09-27 20:47:06.389311 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-27 20:47:06.389333 | orchestrator | RedHat.yml
2025-09-27 20:47:06.403142 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-27 20:47:06.403162 | orchestrator | RedHat.yml
2025-09-27 20:47:06.403215 | orchestrator | = 2.2.0"...
2025-09-27 20:47:35.193741 | orchestrator | 20:47:35.193 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-27 20:47:35.217928 | orchestrator | 20:47:35.217 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-27 20:47:35.376673 | orchestrator | 20:47:35.376 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-27 20:47:35.881295 | orchestrator | 20:47:35.881 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-27 20:47:35.969679 | orchestrator | 20:47:35.968 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-27 20:47:36.507920 | orchestrator | 20:47:36.507 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-27 20:47:36.585112 | orchestrator | 20:47:36.584 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-27 20:47:37.287952 | orchestrator | 20:47:37.287 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-27 20:47:37.288020 | orchestrator | 20:47:37.287 STDOUT terraform: Providers are signed by their developers.
2025-09-27 20:47:37.288029 | orchestrator | 20:47:37.287 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-27 20:47:37.288037 | orchestrator | 20:47:37.287 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-27 20:47:37.288043 | orchestrator | 20:47:37.287 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-27 20:47:37.288096 | orchestrator | 20:47:37.288 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-27 20:47:37.288196 | orchestrator | 20:47:37.288 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-27 20:47:37.288204 | orchestrator | 20:47:37.288 STDOUT terraform: you run "tofu init" in the future.
2025-09-27 20:47:37.288213 | orchestrator | 20:47:37.288 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-27 20:47:37.288285 | orchestrator | 20:47:37.288 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-27 20:47:37.288314 | orchestrator | 20:47:37.288 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-27 20:47:37.288323 | orchestrator | 20:47:37.288 STDOUT terraform: should now work.
2025-09-27 20:47:37.288381 | orchestrator | 20:47:37.288 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-27 20:47:37.288423 | orchestrator | 20:47:37.288 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-27 20:47:37.288472 | orchestrator | 20:47:37.288 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-27 20:47:37.385527 | orchestrator | 20:47:37.383 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-09-27 20:47:37.385608 | orchestrator | 20:47:37.383 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-27 20:47:37.585182 | orchestrator | 20:47:37.585 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-27 20:47:37.585268 | orchestrator | 20:47:37.585 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-27 20:47:37.585403 | orchestrator | 20:47:37.585 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-27 20:47:37.585475 | orchestrator | 20:47:37.585 STDOUT terraform: for this configuration.
2025-09-27 20:47:37.701156 | orchestrator | 20:47:37.701 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
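The provider constraints that `tofu init` resolves above normally come from a `required_providers` block in the Terraform code that Terragrunt wraps. A minimal sketch of what that block could look like; only the ">= 1.53.0" constraint on the openstack provider is confirmed by the log, the remaining entries and the file name are assumptions:

```hcl
# Hypothetical versions.tf for the module initialized above.
terraform {
  required_providers {
    local = {
      source = "hashicorp/local" # version constraint not visible in the log
    }
    null = {
      source = "hashicorp/null" # resolved as "latest version" above
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # matches the "Finding ... versions matching" line
    }
  }
}
```

Committing the generated .terraform.lock.hcl, as the init output recommends, pins the resolved versions (local v2.5.3, null v3.2.4, openstack v3.3.2) for later runs; the subsequent `workspace` call then creates and selects the empty "ci" workspace that the plan below operates on.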
2025-09-27 20:47:37.701194 | orchestrator | 20:47:37.701 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-27 20:47:37.793584 | orchestrator | 20:47:37.792 STDOUT terraform: ci.auto.tfvars 2025-09-27 20:47:38.056521 | orchestrator | 20:47:38.054 STDOUT terraform: default_custom.tf 2025-09-27 20:47:38.657735 | orchestrator | 20:47:38.656 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead. 2025-09-27 20:47:39.606564 | orchestrator | 20:47:39.606 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-27 20:47:40.134092 | orchestrator | 20:47:40.133 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-27 20:47:40.356924 | orchestrator | 20:47:40.354 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-27 20:47:40.356984 | orchestrator | 20:47:40.354 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-27 20:47:40.356993 | orchestrator | 20:47:40.354 STDOUT terraform:  + create 2025-09-27 20:47:40.357001 | orchestrator | 20:47:40.354 STDOUT terraform:  <= read (data resources) 2025-09-27 20:47:40.357009 | orchestrator | 20:47:40.354 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-27 20:47:40.357015 | orchestrator | 20:47:40.354 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-27 20:47:40.357022 | orchestrator | 20:47:40.354 STDOUT terraform:  # (config refers to values not yet known) 2025-09-27 20:47:40.357029 | orchestrator | 20:47:40.354 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-27 20:47:40.357036 | orchestrator | 20:47:40.354 STDOUT terraform:  + checksum = (known after apply) 2025-09-27 20:47:40.357042 | orchestrator | 20:47:40.354 STDOUT terraform:  + created_at = (known after apply) 2025-09-27 20:47:40.357049 | orchestrator | 20:47:40.354 STDOUT terraform:  + file = (known after apply) 2025-09-27 20:47:40.357056 | orchestrator | 20:47:40.354 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357062 | orchestrator | 20:47:40.354 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.357083 | orchestrator | 20:47:40.354 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-27 20:47:40.357090 | orchestrator | 20:47:40.354 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-27 20:47:40.357097 | orchestrator | 20:47:40.354 STDOUT terraform:  + most_recent = true 2025-09-27 20:47:40.357103 | orchestrator | 20:47:40.354 STDOUT terraform:  + name = (known after apply) 2025-09-27 20:47:40.357110 | orchestrator | 20:47:40.354 STDOUT terraform:  + protected = (known after apply) 2025-09-27 20:47:40.357116 | orchestrator | 20:47:40.354 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.357123 | orchestrator | 20:47:40.354 STDOUT terraform:  + schema = (known after apply) 2025-09-27 20:47:40.357130 | orchestrator | 20:47:40.354 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-27 20:47:40.357137 | orchestrator | 20:47:40.354 STDOUT terraform:  + tags = (known after apply) 2025-09-27 20:47:40.357143 | orchestrator | 20:47:40.354 STDOUT terraform:  + updated_at = (known after apply) 2025-09-27 20:47:40.357150 | orchestrator | 
20:47:40.354 STDOUT terraform:  } 2025-09-27 20:47:40.357160 | orchestrator | 20:47:40.354 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-27 20:47:40.357167 | orchestrator | 20:47:40.354 STDOUT terraform:  # (config refers to values not yet known) 2025-09-27 20:47:40.357173 | orchestrator | 20:47:40.354 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-27 20:47:40.357180 | orchestrator | 20:47:40.354 STDOUT terraform:  + checksum = (known after apply) 2025-09-27 20:47:40.357186 | orchestrator | 20:47:40.354 STDOUT terraform:  + created_at = (known after apply) 2025-09-27 20:47:40.357193 | orchestrator | 20:47:40.354 STDOUT terraform:  + file = (known after apply) 2025-09-27 20:47:40.357200 | orchestrator | 20:47:40.354 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357206 | orchestrator | 20:47:40.354 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.357213 | orchestrator | 20:47:40.354 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-27 20:47:40.357219 | orchestrator | 20:47:40.355 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-27 20:47:40.357230 | orchestrator | 20:47:40.355 STDOUT terraform:  + most_recent = true 2025-09-27 20:47:40.357237 | orchestrator | 20:47:40.355 STDOUT terraform:  + name = (known after apply) 2025-09-27 20:47:40.357244 | orchestrator | 20:47:40.355 STDOUT terraform:  + protected = (known after apply) 2025-09-27 20:47:40.357250 | orchestrator | 20:47:40.355 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.357294 | orchestrator | 20:47:40.355 STDOUT terraform:  + schema = (known after apply) 2025-09-27 20:47:40.357303 | orchestrator | 20:47:40.355 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-27 20:47:40.357309 | orchestrator | 20:47:40.355 STDOUT terraform:  + tags = (known after apply) 2025-09-27 20:47:40.357316 | orchestrator | 20:47:40.355 STDOUT terraform:  + updated_at = (known after apply) 2025-09-27 20:47:40.357323 | orchestrator | 20:47:40.355 STDOUT terraform:  } 2025-09-27 20:47:40.357329 | orchestrator | 20:47:40.355 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-27 20:47:40.357341 | orchestrator | 20:47:40.355 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-27 20:47:40.357347 | orchestrator | 20:47:40.355 STDOUT terraform:  + content = (known after apply) 2025-09-27 20:47:40.357354 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-27 20:47:40.357361 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-27 20:47:40.357367 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-27 20:47:40.357374 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-27 20:47:40.357380 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-27 20:47:40.357387 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-27 20:47:40.357394 | orchestrator | 20:47:40.355 STDOUT terraform:  + directory_permission = "0777" 2025-09-27 20:47:40.357401 | orchestrator | 20:47:40.355 STDOUT terraform:  + file_permission = "0644" 2025-09-27 20:47:40.357407 | orchestrator | 20:47:40.355 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-27 20:47:40.357414 | orchestrator | 20:47:40.355 STDOUT 
terraform:  + id = (known after apply) 2025-09-27 20:47:40.357421 | orchestrator | 20:47:40.355 STDOUT terraform:  } 2025-09-27 20:47:40.357427 | orchestrator | 20:47:40.355 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-27 20:47:40.357434 | orchestrator | 20:47:40.355 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-27 20:47:40.357440 | orchestrator | 20:47:40.355 STDOUT terraform:  + content = (known after apply) 2025-09-27 20:47:40.357447 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-27 20:47:40.357454 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-27 20:47:40.357460 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-27 20:47:40.357467 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-27 20:47:40.357473 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-27 20:47:40.357480 | orchestrator | 20:47:40.355 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-27 20:47:40.357486 | orchestrator | 20:47:40.355 STDOUT terraform:  + directory_permission = "0777" 2025-09-27 20:47:40.357493 | orchestrator | 20:47:40.355 STDOUT terraform:  + file_permission = "0644" 2025-09-27 20:47:40.357499 | orchestrator | 20:47:40.355 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-27 20:47:40.357506 | orchestrator | 20:47:40.355 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357512 | orchestrator | 20:47:40.355 STDOUT terraform:  } 2025-09-27 20:47:40.357522 | orchestrator | 20:47:40.355 STDOUT terraform:  # local_file.inventory will be created 2025-09-27 20:47:40.357530 | orchestrator | 20:47:40.355 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-27 20:47:40.357536 | orchestrator | 20:47:40.355 STDOUT terraform:  + content = (known after apply) 2025-09-27 20:47:40.357547 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-27 20:47:40.357553 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-27 20:47:40.357565 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-27 20:47:40.357572 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-27 20:47:40.357578 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-27 20:47:40.357585 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-27 20:47:40.357591 | orchestrator | 20:47:40.356 STDOUT terraform:  + directory_permission = "0777" 2025-09-27 20:47:40.357598 | orchestrator | 20:47:40.356 STDOUT terraform:  + file_permission = "0644" 2025-09-27 20:47:40.357605 | orchestrator | 20:47:40.356 STDOUT terraform:  + filename = "inventory.ci" 2025-09-27 20:47:40.357611 | orchestrator | 20:47:40.356 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357618 | orchestrator | 20:47:40.356 STDOUT terraform:  } 2025-09-27 20:47:40.357624 | orchestrator | 20:47:40.356 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-27 20:47:40.357631 | orchestrator | 20:47:40.356 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-27 20:47:40.357638 | orchestrator | 20:47:40.356 STDOUT terraform:  + content = (sensitive value) 2025-09-27 
20:47:40.357645 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-27 20:47:40.357651 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-27 20:47:40.357658 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-27 20:47:40.357665 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-27 20:47:40.357671 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-27 20:47:40.357678 | orchestrator | 20:47:40.356 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-27 20:47:40.357684 | orchestrator | 20:47:40.356 STDOUT terraform:  + directory_permission = "0700" 2025-09-27 20:47:40.357691 | orchestrator | 20:47:40.356 STDOUT terraform:  + file_permission = "0600" 2025-09-27 20:47:40.357697 | orchestrator | 20:47:40.356 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-27 20:47:40.357704 | orchestrator | 20:47:40.356 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357711 | orchestrator | 20:47:40.356 STDOUT terraform:  } 2025-09-27 20:47:40.357718 | orchestrator | 20:47:40.356 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-27 20:47:40.357724 | orchestrator | 20:47:40.356 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-27 20:47:40.357731 | orchestrator | 20:47:40.356 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357737 | orchestrator | 20:47:40.356 STDOUT terraform:  } 2025-09-27 20:47:40.357744 | orchestrator | 20:47:40.356 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-27 20:47:40.357758 | orchestrator | 20:47:40.356 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-27 20:47:40.357765 | orchestrator | 20:47:40.356 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.357772 | orchestrator | 20:47:40.356 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.357779 | orchestrator | 20:47:40.356 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357785 | orchestrator | 20:47:40.356 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.357792 | orchestrator | 20:47:40.356 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.357798 | orchestrator | 20:47:40.357 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-27 20:47:40.357805 | orchestrator | 20:47:40.357 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.357812 | orchestrator | 20:47:40.357 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.357823 | orchestrator | 20:47:40.357 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.357830 | orchestrator | 20:47:40.357 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.357836 | orchestrator | 20:47:40.357 STDOUT terraform:  } 2025-09-27 20:47:40.357843 | orchestrator | 20:47:40.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-27 20:47:40.357849 | orchestrator | 20:47:40.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-27 20:47:40.357856 | orchestrator | 20:47:40.357 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.357863 | orchestrator | 20:47:40.357 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 
20:47:40.357869 | orchestrator | 20:47:40.357 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357876 | orchestrator | 20:47:40.357 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.357882 | orchestrator | 20:47:40.357 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.357889 | orchestrator | 20:47:40.357 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-09-27 20:47:40.357896 | orchestrator | 20:47:40.357 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.357902 | orchestrator | 20:47:40.357 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.357909 | orchestrator | 20:47:40.357 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.357915 | orchestrator | 20:47:40.357 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.357922 | orchestrator | 20:47:40.357 STDOUT terraform:  } 2025-09-27 20:47:40.357928 | orchestrator | 20:47:40.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-09-27 20:47:40.357935 | orchestrator | 20:47:40.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-27 20:47:40.357942 | orchestrator | 20:47:40.357 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.357952 | orchestrator | 20:47:40.357 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.357959 | orchestrator | 20:47:40.357 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.357965 | orchestrator | 20:47:40.357 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.357972 | orchestrator | 20:47:40.357 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.357979 | orchestrator | 20:47:40.357 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-09-27 20:47:40.357985 | orchestrator | 20:47:40.357 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.357992 | orchestrator | 20:47:40.357 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.357999 | orchestrator | 20:47:40.357 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.358005 | orchestrator | 20:47:40.357 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.358031 | orchestrator | 20:47:40.357 STDOUT terraform:  } 2025-09-27 20:47:40.358039 | orchestrator | 20:47:40.357 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-09-27 20:47:40.358046 | orchestrator | 20:47:40.357 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-27 20:47:40.358052 | orchestrator | 20:47:40.357 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.358063 | orchestrator | 20:47:40.357 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.358069 | orchestrator | 20:47:40.357 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.358079 | orchestrator | 20:47:40.357 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.358085 | orchestrator | 20:47:40.358 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.358135 | orchestrator | 20:47:40.358 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-09-27 20:47:40.358168 | orchestrator | 20:47:40.358 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.358200 | orchestrator | 20:47:40.358 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.358223 | orchestrator | 20:47:40.358 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-27 20:47:40.358245 | orchestrator | 20:47:40.358 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.358255 | orchestrator | 20:47:40.358 STDOUT terraform:  } 2025-09-27 20:47:40.358330 | orchestrator | 20:47:40.358 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-09-27 20:47:40.358388 | orchestrator | 20:47:40.358 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-27 20:47:40.358442 | orchestrator | 20:47:40.358 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.358466 | orchestrator | 20:47:40.358 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.358513 | orchestrator | 20:47:40.358 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.358553 | orchestrator | 20:47:40.358 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.361082 | orchestrator | 20:47:40.358 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361117 | orchestrator | 20:47:40.358 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-09-27 20:47:40.361124 | orchestrator | 20:47:40.358 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.361131 | orchestrator | 20:47:40.358 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.361137 | orchestrator | 20:47:40.358 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.361144 | orchestrator | 20:47:40.358 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.361151 | orchestrator | 20:47:40.358 STDOUT terraform:  } 2025-09-27 20:47:40.361157 | orchestrator | 20:47:40.358 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-09-27 20:47:40.361172 | orchestrator | 20:47:40.358 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-27 20:47:40.361179 | orchestrator | 20:47:40.358 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.361185 | orchestrator | 20:47:40.358 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.361191 | orchestrator | 20:47:40.358 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.361209 | orchestrator | 20:47:40.358 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.361215 | orchestrator | 20:47:40.358 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361222 | orchestrator | 20:47:40.358 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-09-27 20:47:40.361228 | orchestrator | 20:47:40.359 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.361234 | orchestrator | 20:47:40.359 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.361240 | orchestrator | 20:47:40.359 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.361246 | orchestrator | 20:47:40.359 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.361252 | orchestrator | 20:47:40.359 STDOUT terraform:  } 2025-09-27 20:47:40.361258 | orchestrator | 20:47:40.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-09-27 20:47:40.361264 | orchestrator | 20:47:40.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-27 20:47:40.361270 | orchestrator | 20:47:40.359 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.361316 | orchestrator | 20:47:40.359 STDOUT terraform:  + availability_zone = "nova" 
2025-09-27 20:47:40.361324 | orchestrator | 20:47:40.359 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.361330 | orchestrator | 20:47:40.359 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.361336 | orchestrator | 20:47:40.359 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361342 | orchestrator | 20:47:40.359 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-09-27 20:47:40.361348 | orchestrator | 20:47:40.359 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.361355 | orchestrator | 20:47:40.359 STDOUT terraform:  + size = 80 2025-09-27 20:47:40.361365 | orchestrator | 20:47:40.359 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.361371 | orchestrator | 20:47:40.359 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.361377 | orchestrator | 20:47:40.359 STDOUT terraform:  } 2025-09-27 20:47:40.361383 | orchestrator | 20:47:40.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-09-27 20:47:40.361388 | orchestrator | 20:47:40.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.361396 | orchestrator | 20:47:40.359 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.361402 | orchestrator | 20:47:40.359 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.361413 | orchestrator | 20:47:40.359 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.361418 | orchestrator | 20:47:40.359 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361424 | orchestrator | 20:47:40.359 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-27 20:47:40.361429 | orchestrator | 20:47:40.359 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.361434 | orchestrator | 20:47:40.359 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.361440 | orchestrator | 20:47:40.359 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.361445 | orchestrator | 20:47:40.359 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.361450 | orchestrator | 20:47:40.359 STDOUT terraform:  } 2025-09-27 20:47:40.361456 | orchestrator | 20:47:40.359 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-27 20:47:40.361461 | orchestrator | 20:47:40.359 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.361466 | orchestrator | 20:47:40.360 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.361472 | orchestrator | 20:47:40.360 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.361477 | orchestrator | 20:47:40.360 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.361482 | orchestrator | 20:47:40.360 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361487 | orchestrator | 20:47:40.360 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-27 20:47:40.361493 | orchestrator | 20:47:40.360 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.361499 | orchestrator | 20:47:40.360 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.361505 | orchestrator | 20:47:40.360 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.361510 | orchestrator | 20:47:40.360 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.361515 | orchestrator | 20:47:40.360 STDOUT terraform:  } 2025-09-27 20:47:40.361521 | orchestrator 
| 20:47:40.360 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-27 20:47:40.361526 | orchestrator | 20:47:40.360 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.361531 | orchestrator | 20:47:40.360 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.361540 | orchestrator | 20:47:40.360 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.361545 | orchestrator | 20:47:40.360 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.361550 | orchestrator | 20:47:40.360 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361556 | orchestrator | 20:47:40.360 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-27 20:47:40.361561 | orchestrator | 20:47:40.360 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.361566 | orchestrator | 20:47:40.360 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.361575 | orchestrator | 20:47:40.360 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.361580 | orchestrator | 20:47:40.360 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.361586 | orchestrator | 20:47:40.360 STDOUT terraform:  } 2025-09-27 20:47:40.361591 | orchestrator | 20:47:40.360 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-27 20:47:40.361596 | orchestrator | 20:47:40.360 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.361602 | orchestrator | 20:47:40.360 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.361607 | orchestrator | 20:47:40.360 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.361612 | orchestrator | 20:47:40.360 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.361620 | orchestrator | 20:47:40.360 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.361625 | orchestrator | 20:47:40.360 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-27 20:47:40.365243 | orchestrator | 20:47:40.363 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.365271 | orchestrator | 20:47:40.363 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.365317 | orchestrator | 20:47:40.363 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.365324 | orchestrator | 20:47:40.363 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.365330 | orchestrator | 20:47:40.363 STDOUT terraform:  } 2025-09-27 20:47:40.365336 | orchestrator | 20:47:40.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-27 20:47:40.365342 | orchestrator | 20:47:40.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.365347 | orchestrator | 20:47:40.363 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.365353 | orchestrator | 20:47:40.363 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.365358 | orchestrator | 20:47:40.363 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.365364 | orchestrator | 20:47:40.363 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.365369 | orchestrator | 20:47:40.363 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-27 20:47:40.365374 | orchestrator | 20:47:40.363 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.365389 | orchestrator | 20:47:40.363 STDOUT 
terraform:  + size = 20 2025-09-27 20:47:40.365395 | orchestrator | 20:47:40.363 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.365400 | orchestrator | 20:47:40.363 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.365405 | orchestrator | 20:47:40.363 STDOUT terraform:  } 2025-09-27 20:47:40.365410 | orchestrator | 20:47:40.363 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-27 20:47:40.365415 | orchestrator | 20:47:40.363 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.365419 | orchestrator | 20:47:40.364 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.365424 | orchestrator | 20:47:40.364 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.365429 | orchestrator | 20:47:40.364 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.365433 | orchestrator | 20:47:40.364 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.365438 | orchestrator | 20:47:40.364 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-27 20:47:40.365443 | orchestrator | 20:47:40.364 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.365454 | orchestrator | 20:47:40.364 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.365460 | orchestrator | 20:47:40.364 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.365464 | orchestrator | 20:47:40.364 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.365469 | orchestrator | 20:47:40.364 STDOUT terraform:  } 2025-09-27 20:47:40.365474 | orchestrator | 20:47:40.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-27 20:47:40.365479 | orchestrator | 20:47:40.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.365484 | orchestrator | 20:47:40.364 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.365488 | orchestrator | 20:47:40.364 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.365493 | orchestrator | 20:47:40.364 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.365498 | orchestrator | 20:47:40.364 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.365503 | orchestrator | 20:47:40.364 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-27 20:47:40.365508 | orchestrator | 20:47:40.364 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.365519 | orchestrator | 20:47:40.364 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.365524 | orchestrator | 20:47:40.364 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.365529 | orchestrator | 20:47:40.364 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.365534 | orchestrator | 20:47:40.364 STDOUT terraform:  } 2025-09-27 20:47:40.365539 | orchestrator | 20:47:40.364 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-27 20:47:40.365544 | orchestrator | 20:47:40.364 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.365552 | orchestrator | 20:47:40.364 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.365556 | orchestrator | 20:47:40.364 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.365561 | orchestrator | 20:47:40.364 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.365566 | orchestrator | 
20:47:40.365 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.365727 | orchestrator | 20:47:40.365 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-27 20:47:40.365749 | orchestrator | 20:47:40.365 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.365769 | orchestrator | 20:47:40.365 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.365792 | orchestrator | 20:47:40.365 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.365814 | orchestrator | 20:47:40.365 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.365822 | orchestrator | 20:47:40.365 STDOUT terraform:  } 2025-09-27 20:47:40.365878 | orchestrator | 20:47:40.365 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-27 20:47:40.365908 | orchestrator | 20:47:40.365 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-27 20:47:40.365980 | orchestrator | 20:47:40.365 STDOUT terraform:  + attachment = (known after apply) 2025-09-27 20:47:40.366034 | orchestrator | 20:47:40.365 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.366458 | orchestrator | 20:47:40.366 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.366774 | orchestrator | 20:47:40.366 STDOUT terraform:  + metadata = (known after apply) 2025-09-27 20:47:40.367377 | orchestrator | 20:47:40.366 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-27 20:47:40.368012 | orchestrator | 20:47:40.367 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.368193 | orchestrator | 20:47:40.368 STDOUT terraform:  + size = 20 2025-09-27 20:47:40.368367 | orchestrator | 20:47:40.368 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-27 20:47:40.368739 | orchestrator | 20:47:40.368 STDOUT terraform:  + volume_type = "ssd" 2025-09-27 20:47:40.368898 | orchestrator | 20:47:40.368 STDOUT terraform:  } 2025-09-27 20:47:40.369437 | orchestrator | 20:47:40.368 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-27 20:47:40.369854 | orchestrator | 20:47:40.369 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-27 20:47:40.369874 | orchestrator | 20:47:40.369 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-27 20:47:40.369912 | orchestrator | 20:47:40.369 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-27 20:47:40.370316 | orchestrator | 20:47:40.369 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-27 20:47:40.370395 | orchestrator | 20:47:40.370 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.370405 | orchestrator | 20:47:40.370 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.370431 | orchestrator | 20:47:40.370 STDOUT terraform:  + config_drive = true 2025-09-27 20:47:40.370474 | orchestrator | 20:47:40.370 STDOUT terraform:  + created = (known after apply) 2025-09-27 20:47:40.370502 | orchestrator | 20:47:40.370 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-27 20:47:40.370531 | orchestrator | 20:47:40.370 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-27 20:47:40.370555 | orchestrator | 20:47:40.370 STDOUT terraform:  + force_delete = false 2025-09-27 20:47:40.370588 | orchestrator | 20:47:40.370 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-27 20:47:40.370622 | orchestrator | 20:47:40.370 STDOUT terraform:  + id = (known after apply) 2025-09-27 
20:47:40.370659 | orchestrator | 20:47:40.370 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.370697 | orchestrator | 20:47:40.370 STDOUT terraform:  + image_name = (known after apply) 2025-09-27 20:47:40.370723 | orchestrator | 20:47:40.370 STDOUT terraform:  + key_pair = "testbed" 2025-09-27 20:47:40.370752 | orchestrator | 20:47:40.370 STDOUT terraform:  + name = "testbed-manager" 2025-09-27 20:47:40.370777 | orchestrator | 20:47:40.370 STDOUT terraform:  + power_state = "active" 2025-09-27 20:47:40.370813 | orchestrator | 20:47:40.370 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.370846 | orchestrator | 20:47:40.370 STDOUT terraform:  + security_groups = (known after apply) 2025-09-27 20:47:40.370871 | orchestrator | 20:47:40.370 STDOUT terraform:  + stop_before_destroy = false 2025-09-27 20:47:40.370906 | orchestrator | 20:47:40.370 STDOUT terraform:  + updated = (known after apply) 2025-09-27 20:47:40.370934 | orchestrator | 20:47:40.370 STDOUT terraform:  + user_data = (sensitive value) 2025-09-27 20:47:40.370943 | orchestrator | 20:47:40.370 STDOUT terraform:  + block_device { 2025-09-27 20:47:40.370970 | orchestrator | 20:47:40.370 STDOUT terraform:  + boot_index = 0 2025-09-27 20:47:40.370998 | orchestrator | 20:47:40.370 STDOUT terraform:  + delete_on_termination = false 2025-09-27 20:47:40.371026 | orchestrator | 20:47:40.370 STDOUT terraform:  + destination_type = "volume" 2025-09-27 20:47:40.371054 | orchestrator | 20:47:40.371 STDOUT terraform:  + multiattach = false 2025-09-27 20:47:40.371081 | orchestrator | 20:47:40.371 STDOUT terraform:  + source_type = "volume" 2025-09-27 20:47:40.371117 | orchestrator | 20:47:40.371 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 20:47:40.371125 | orchestrator | 20:47:40.371 STDOUT terraform:  } 2025-09-27 20:47:40.371145 | orchestrator | 20:47:40.371 STDOUT terraform:  + network { 2025-09-27 20:47:40.371165 | orchestrator | 20:47:40.371 STDOUT terraform:  + access_network = false 2025-09-27 20:47:40.371194 | orchestrator | 20:47:40.371 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-27 20:47:40.371224 | orchestrator | 20:47:40.371 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-27 20:47:40.371253 | orchestrator | 20:47:40.371 STDOUT terraform:  + mac = (known after apply) 2025-09-27 20:47:40.371295 | orchestrator | 20:47:40.371 STDOUT terraform:  + name = (known after apply) 2025-09-27 20:47:40.371329 | orchestrator | 20:47:40.371 STDOUT terraform:  + port = (known after apply) 2025-09-27 20:47:40.371359 | orchestrator | 20:47:40.371 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 20:47:40.371367 | orchestrator | 20:47:40.371 STDOUT terraform:  } 2025-09-27 20:47:40.371386 | orchestrator | 20:47:40.371 STDOUT terraform:  } 2025-09-27 20:47:40.371452 | orchestrator | 20:47:40.371 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-27 20:47:40.371484 | orchestrator | 20:47:40.371 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-27 20:47:40.371518 | orchestrator | 20:47:40.371 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-27 20:47:40.371556 | orchestrator | 20:47:40.371 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-27 20:47:40.371586 | orchestrator | 20:47:40.371 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-27 20:47:40.371619 | orchestrator | 20:47:40.371 STDOUT terraform:  + all_tags = (known after apply) 
2025-09-27 20:47:40.371643 | orchestrator | 20:47:40.371 STDOUT terraform:  + availability_zone = "nova" 2025-09-27 20:47:40.371664 | orchestrator | 20:47:40.371 STDOUT terraform:  + config_drive = true 2025-09-27 20:47:40.371697 | orchestrator | 20:47:40.371 STDOUT terraform:  + created = (known after apply) 2025-09-27 20:47:40.371731 | orchestrator | 20:47:40.371 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-27 20:47:40.371760 | orchestrator | 20:47:40.371 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-27 20:47:40.371785 | orchestrator | 20:47:40.371 STDOUT terraform:  + force_delete = false 2025-09-27 20:47:40.371818 | orchestrator | 20:47:40.371 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-27 20:47:40.371851 | orchestrator | 20:47:40.371 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.371884 | orchestrator | 20:47:40.371 STDOUT terraform:  + image_id = (known after apply) 2025-09-27 20:47:40.371918 | orchestrator | 20:47:40.371 STDOUT terraform:  + image_name = (known after apply) 2025-09-27 20:47:40.371943 | orchestrator | 20:47:40.371 STDOUT terraform:  + key_pair = "testbed" 2025-09-27 20:47:40.371971 | orchestrator | 20:47:40.371 STDOUT terraform:  + name = "testbed-node-0" 2025-09-27 20:47:40.371994 | orchestrator | 20:47:40.371 STDOUT terraform:  + power_state = "active" 2025-09-27 20:47:40.372030 | orchestrator | 20:47:40.371 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.372063 | orchestrator | 20:47:40.372 STDOUT terraform:  + security_groups = (known after apply) 2025-09-27 20:47:40.372086 | orchestrator | 20:47:40.372 STDOUT terraform:  + stop_before_destroy = false 2025-09-27 20:47:40.372121 | orchestrator | 20:47:40.372 STDOUT terraform:  + updated = (known after apply) 2025-09-27 20:47:40.372171 | orchestrator | 20:47:40.372 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-27 20:47:40.372179 | orchestrator | 20:47:40.372 STDOUT terraform:  + block_device { 2025-09-27 20:47:40.372206 | orchestrator | 20:47:40.372 STDOUT terraform:  + boot_index = 0 2025-09-27 20:47:40.372232 | orchestrator | 20:47:40.372 STDOUT terraform:  + delete_on_termination = false 2025-09-27 20:47:40.372260 | orchestrator | 20:47:40.372 STDOUT terraform:  + destination_type = "volume" 2025-09-27 20:47:40.372368 | orchestrator | 20:47:40.372 STDOUT terraform:  + multiattach = false 2025-09-27 20:47:40.372386 | orchestrator | 20:47:40.372 STDOUT terraform:  + source_type = "volume" 2025-09-27 20:47:40.372391 | orchestrator | 20:47:40.372 STDOUT terraform:  + uuid = (known after apply) 2025-09-27 20:47:40.372397 | orchestrator | 20:47:40.372 STDOUT terraform:  } 2025-09-27 20:47:40.372405 | orchestrator | 20:47:40.372 STDOUT terraform:  + network { 2025-09-27 20:47:40.372410 | orchestrator | 20:47:40.372 STDOUT terraform:  + access_network = false 2025-09-27 20:47:40.372417 | orchestrator | 20:47:40.372 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-27 20:47:40.372443 | orchestrator | 20:47:40.372 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-27 20:47:40.372473 | orchestrator | 20:47:40.372 STDOUT terraform:  + mac = (known after apply) 2025-09-27 20:47:40.372502 | orchestrator | 20:47:40.372 STDOUT terraform:  + name = (known after apply) 2025-09-27 20:47:40.372530 | orchestrator | 20:47:40.372 STDOUT terraform:  + port = (known after apply) 2025-09-27 20:47:40.372560 | orchestrator | 20:47:40.372 STDOUT terraform:  + uuid = (known after apply) 
  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-1"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-2"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-3"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-4"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4 = (known after apply)
      + access_ip_v6 = (known after apply)
      + all_metadata = (known after apply)
      + all_tags = (known after apply)
      + availability_zone = "nova"
      + config_drive = true
      + created = (known after apply)
      + flavor_id = (known after apply)
      + flavor_name = "OSISM-8V-32"
      + force_delete = false
      + hypervisor_hostname = (known after apply)
      + id = (known after apply)
      + image_id = (known after apply)
      + image_name = (known after apply)
      + key_pair = "testbed"
      + name = "testbed-node-5"
      + power_state = "active"
      + region = (known after apply)
      + security_groups = (known after apply)
      + stop_before_destroy = false
      + updated = (known after apply)
      + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index = 0
          + delete_on_termination = false
          + destination_type = "volume"
          + multiattach = false
          + source_type = "volume"
          + uuid = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4 = (known after apply)
          + fixed_ip_v6 = (known after apply)
          + mac = (known after apply)
          + name = (known after apply)
          + port = (known after apply)
          + uuid = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + private_key = (sensitive value)
      + public_key = (known after apply)
      + region = (known after apply)
      + user_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device = (known after apply)
      + id = (known after apply)
      + instance_id = (known after apply)
      + region = (known after apply)
      + volume_id = (known after apply)
    }
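The keypair and the nine volume attachments planned above map onto two small resources. A hedged sketch, assuming a separately managed block-storage volume resource and an illustrative instance-to-volume mapping (both are assumptions, not the testbed repository's actual code):

    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
      # With no public_key supplied, the provider generates the keypair and
      # exposes the private key as a sensitive attribute, as in the plan above.
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id   # assumed mapping
      volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id   # assumed volume resource
    }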
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-09-27 20:47:40.391684 | orchestrator | 20:47:40.387 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-27 20:47:40.391687 | orchestrator | 20:47:40.387 STDOUT terraform:  + floating_ip = (known after apply) 2025-09-27 20:47:40.391691 | orchestrator | 20:47:40.387 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.391695 | orchestrator | 20:47:40.387 STDOUT terraform:  + port_id = (known after apply) 2025-09-27 20:47:40.391699 | orchestrator | 20:47:40.387 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.391702 | orchestrator | 20:47:40.387 STDOUT terraform:  } 2025-09-27 20:47:40.391706 | orchestrator | 20:47:40.387 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-09-27 20:47:40.391710 | orchestrator | 20:47:40.387 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-09-27 20:47:40.391714 | orchestrator | 20:47:40.387 STDOUT terraform:  + address = (known after apply) 2025-09-27 20:47:40.391718 | orchestrator | 20:47:40.387 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.391721 | orchestrator | 20:47:40.387 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-27 20:47:40.391725 | orchestrator | 20:47:40.387 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 20:47:40.391729 | orchestrator | 20:47:40.387 STDOUT terraform:  + fixed_ip = (known after apply) 2025-09-27 20:47:40.391733 | orchestrator | 20:47:40.387 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.391737 | orchestrator | 20:47:40.387 STDOUT terraform:  + pool = "public" 2025-09-27 20:47:40.391741 | orchestrator | 20:47:40.387 STDOUT terraform:  + port_id = (known after apply) 2025-09-27 20:47:40.391744 | orchestrator | 20:47:40.388 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.391748 | orchestrator | 20:47:40.388 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 20:47:40.391755 | orchestrator | 20:47:40.388 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.391759 | orchestrator | 20:47:40.388 STDOUT terraform:  } 2025-09-27 20:47:40.391763 | orchestrator | 20:47:40.388 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-09-27 20:47:40.391766 | orchestrator | 20:47:40.388 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-09-27 20:47:40.391770 | orchestrator | 20:47:40.388 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 20:47:40.391774 | orchestrator | 20:47:40.388 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.391778 | orchestrator | 20:47:40.388 STDOUT terraform:  + availability_zone_hints = [ 2025-09-27 20:47:40.391782 | orchestrator | 20:47:40.388 STDOUT terraform:  + "nova", 2025-09-27 20:47:40.391786 | orchestrator | 20:47:40.388 STDOUT terraform:  ] 2025-09-27 20:47:40.391790 | orchestrator | 20:47:40.388 STDOUT terraform:  + dns_domain = (known after apply) 2025-09-27 20:47:40.391793 | orchestrator | 20:47:40.388 STDOUT terraform:  + external = (known after apply) 2025-09-27 20:47:40.391800 | orchestrator | 20:47:40.388 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.391804 | orchestrator | 20:47:40.388 STDOUT terraform:  + mtu = (known after apply) 2025-09-27 20:47:40.391807 | orchestrator | 20:47:40.388 STDOUT terraform:  + name = 
"net-testbed-management" 2025-09-27 20:47:40.391811 | orchestrator | 20:47:40.388 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 20:47:40.391815 | orchestrator | 20:47:40.388 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 20:47:40.391818 | orchestrator | 20:47:40.388 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.391824 | orchestrator | 20:47:40.388 STDOUT terraform:  + shared = (known after apply) 2025-09-27 20:47:40.391828 | orchestrator | 20:47:40.388 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.391832 | orchestrator | 20:47:40.388 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-09-27 20:47:40.391835 | orchestrator | 20:47:40.388 STDOUT terraform:  + segments (known after apply) 2025-09-27 20:47:40.391839 | orchestrator | 20:47:40.388 STDOUT terraform:  } 2025-09-27 20:47:40.391843 | orchestrator | 20:47:40.388 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-09-27 20:47:40.391847 | orchestrator | 20:47:40.388 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-09-27 20:47:40.391850 | orchestrator | 20:47:40.388 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 20:47:40.391854 | orchestrator | 20:47:40.388 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-27 20:47:40.391858 | orchestrator | 20:47:40.388 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-27 20:47:40.391862 | orchestrator | 20:47:40.388 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.391866 | orchestrator | 20:47:40.388 STDOUT terraform:  + device_id = (known after apply) 2025-09-27 20:47:40.391872 | orchestrator | 20:47:40.388 STDOUT terraform:  + device_owner = (known after apply) 2025-09-27 20:47:40.391876 | orchestrator | 20:47:40.388 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-27 20:47:40.391879 | orchestrator | 20:47:40.388 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 20:47:40.391883 | orchestrator | 20:47:40.388 STDOUT terraform:  + id 2025-09-27 20:47:40.391887 | orchestrator | 20:47:40.388 STDOUT terraform:  = (known after apply) 2025-09-27 20:47:40.391891 | orchestrator | 20:47:40.389 STDOUT terraform:  + mac_address = (known after apply) 2025-09-27 20:47:40.391895 | orchestrator | 20:47:40.389 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 20:47:40.391898 | orchestrator | 20:47:40.389 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 20:47:40.391902 | orchestrator | 20:47:40.389 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 20:47:40.391906 | orchestrator | 20:47:40.389 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.391910 | orchestrator | 20:47:40.389 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-27 20:47:40.391913 | orchestrator | 20:47:40.389 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.391917 | orchestrator | 20:47:40.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 20:47:40.391921 | orchestrator | 20:47:40.389 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-27 20:47:40.391925 | orchestrator | 20:47:40.389 STDOUT terraform:  } 2025-09-27 20:47:40.391929 | orchestrator | 20:47:40.389 STDOUT terraform:  + binding (known after apply) 2025-09-27 20:47:40.391932 | orchestrator | 20:47:40.389 STDOUT terraform:  + fixed_ip { 
2025-09-27 20:47:40.391936 | orchestrator | 20:47:40.389 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-09-27 20:47:40.391940 | orchestrator | 20:47:40.389 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-27 20:47:40.391944 | orchestrator | 20:47:40.389 STDOUT terraform:  } 2025-09-27 20:47:40.391950 | orchestrator | 20:47:40.389 STDOUT terraform:  } 2025-09-27 20:47:40.391954 | orchestrator | 20:47:40.389 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-09-27 20:47:40.391958 | orchestrator | 20:47:40.389 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-27 20:47:40.391961 | orchestrator | 20:47:40.389 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-27 20:47:40.391965 | orchestrator | 20:47:40.389 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-27 20:47:40.391969 | orchestrator | 20:47:40.389 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-27 20:47:40.391972 | orchestrator | 20:47:40.389 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.391976 | orchestrator | 20:47:40.389 STDOUT terraform:  + device_id = (known after apply) 2025-09-27 20:47:40.391980 | orchestrator | 20:47:40.389 STDOUT terraform:  + device_owner = (known after apply) 2025-09-27 20:47:40.391984 | orchestrator | 20:47:40.389 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-27 20:47:40.391990 | orchestrator | 20:47:40.389 STDOUT terraform:  + dns_name = (known after apply) 2025-09-27 20:47:40.391994 | orchestrator | 20:47:40.389 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.391998 | orchestrator | 20:47:40.389 STDOUT terraform:  + mac_address = (known after apply) 2025-09-27 20:47:40.392002 | orchestrator | 20:47:40.389 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 20:47:40.392008 | orchestrator | 20:47:40.389 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-27 20:47:40.392011 | orchestrator | 20:47:40.389 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-27 20:47:40.392015 | orchestrator | 20:47:40.389 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.392019 | orchestrator | 20:47:40.389 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-27 20:47:40.392023 | orchestrator | 20:47:40.389 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.392026 | orchestrator | 20:47:40.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 20:47:40.392030 | orchestrator | 20:47:40.389 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-27 20:47:40.392034 | orchestrator | 20:47:40.389 STDOUT terraform:  } 2025-09-27 20:47:40.392038 | orchestrator | 20:47:40.389 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 20:47:40.392042 | orchestrator | 20:47:40.390 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-27 20:47:40.392045 | orchestrator | 20:47:40.390 STDOUT terraform:  } 2025-09-27 20:47:40.392049 | orchestrator | 20:47:40.390 STDOUT terraform:  + allowed_address_pairs { 2025-09-27 20:47:40.392053 | orchestrator | 20:47:40.390 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-27 20:47:40.392057 | orchestrator | 20:47:40.391 STDOUT terraform:  } 2025-09-27 20:47:40.392060 | orchestrator | 20:47:40.391 STDOUT terraform:  + binding (known after apply) 2025-09-27 20:47:40.392064 | orchestrator | 20:47:40.391 STDOUT terraform:  + fixed_ip { 2025-09-27 
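The manager port above and the six node ports that follow differ only in their fixed IP address and the virtual addresses they may carry. A minimal sketch of one node port, assuming the management network and subnet are managed in the same configuration (the subnet resource name is an assumption):

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
        ip_address = "192.168.16.${10 + count.index}"
      }

      # Allow the shared/virtual addresses (e.g. VIPs) on every node port.
      allowed_address_pairs {
        ip_address = "192.168.16.254/32"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.8/32"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.9/32"
      }
    }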
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up = (known after apply)
      + all_fixed_ips = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags = (known after apply)
      + device_id = (known after apply)
      + device_owner = (known after apply)
      + dns_assignment = (known after apply)
      + dns_name = (known after apply)
      + id = (known after apply)
      + mac_address = (known after apply)
      + network_id = (known after apply)
      + port_security_enabled = (known after apply)
      + qos_policy_id = (known after apply)
      + region = (known after apply)
      + security_group_ids = (known after apply)
      + tenant_id = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id = (known after apply)
      + port_id = (known after apply)
      + region = (known after apply)
      + router_id = (known after apply)
      + subnet_id = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up = (known after apply)
      + all_tags = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed = (known after apply)
      + enable_snat = (known after apply)
      + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id = (known after apply)
      + id = (known after apply)
      + name = "testbed"
      + region = (known after apply)
      + tenant_id = (known after apply)
      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description = "ssh"
      + direction = "ingress"
      + ethertype = "IPv4"
      + id = (known after apply)
      + port_range_max = 22
      + port_range_min = 22
      + protocol = "tcp"
      + region = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id = (known after apply)
      + remote_ip_prefix = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description = "wireguard"
      + direction = "ingress"
      + ethertype = "IPv4"
      + id = (known after apply)
      + port_range_max = 51820
      + port_range_min = 51820
      + protocol = "udp"
      + region = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id = (known after apply)
      + remote_ip_prefix = "0.0.0.0/0"
      + security_group_id = (known after apply)
      + tenant_id = (known after apply)
    }
STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413010 | orchestrator | 20:47:40.407 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413015 | orchestrator | 20:47:40.407 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413019 | orchestrator | 20:47:40.407 STDOUT terraform:  + protocol = "tcp" 2025-09-27 20:47:40.413030 | orchestrator | 20:47:40.407 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413034 | orchestrator | 20:47:40.407 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 20:47:40.413039 | orchestrator | 20:47:40.407 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413043 | orchestrator | 20:47:40.407 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-27 20:47:40.413048 | orchestrator | 20:47:40.408 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413053 | orchestrator | 20:47:40.408 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413057 | orchestrator | 20:47:40.408 STDOUT terraform:  } 2025-09-27 20:47:40.413064 | orchestrator | 20:47:40.408 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-27 20:47:40.413069 | orchestrator | 20:47:40.408 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-27 20:47:40.413074 | orchestrator | 20:47:40.408 STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413078 | orchestrator | 20:47:40.408 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413083 | orchestrator | 20:47:40.408 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413087 | orchestrator | 20:47:40.408 STDOUT terraform:  + protocol = "udp" 2025-09-27 20:47:40.413092 | orchestrator | 20:47:40.408 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413096 | orchestrator | 20:47:40.408 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 20:47:40.413104 | orchestrator | 20:47:40.408 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413109 | orchestrator | 20:47:40.408 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-27 20:47:40.413113 | orchestrator | 20:47:40.408 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413118 | orchestrator | 20:47:40.408 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413122 | orchestrator | 20:47:40.408 STDOUT terraform:  } 2025-09-27 20:47:40.413127 | orchestrator | 20:47:40.408 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-27 20:47:40.413131 | orchestrator | 20:47:40.408 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-27 20:47:40.413136 | orchestrator | 20:47:40.408 STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413159 | orchestrator | 20:47:40.408 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413164 | orchestrator | 20:47:40.408 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413169 | orchestrator | 20:47:40.408 STDOUT terraform:  + protocol = "icmp" 2025-09-27 20:47:40.413173 | orchestrator | 20:47:40.408 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413178 | orchestrator | 20:47:40.408 STDOUT terraform:  + remote_address_group_id = (known after apply) 
2025-09-27 20:47:40.413182 | orchestrator | 20:47:40.408 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413187 | orchestrator | 20:47:40.408 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 20:47:40.413191 | orchestrator | 20:47:40.408 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413196 | orchestrator | 20:47:40.409 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413201 | orchestrator | 20:47:40.409 STDOUT terraform:  } 2025-09-27 20:47:40.413205 | orchestrator | 20:47:40.409 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-27 20:47:40.413210 | orchestrator | 20:47:40.409 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-27 20:47:40.413215 | orchestrator | 20:47:40.409 STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413225 | orchestrator | 20:47:40.409 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413230 | orchestrator | 20:47:40.409 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413234 | orchestrator | 20:47:40.409 STDOUT terraform:  + protocol = "tcp" 2025-09-27 20:47:40.413239 | orchestrator | 20:47:40.409 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413244 | orchestrator | 20:47:40.409 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 20:47:40.413248 | orchestrator | 20:47:40.409 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413253 | orchestrator | 20:47:40.409 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 20:47:40.413265 | orchestrator | 20:47:40.409 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413269 | orchestrator | 20:47:40.409 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413274 | orchestrator | 20:47:40.409 STDOUT terraform:  } 2025-09-27 20:47:40.413289 | orchestrator | 20:47:40.409 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-27 20:47:40.413294 | orchestrator | 20:47:40.409 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-27 20:47:40.413299 | orchestrator | 20:47:40.409 STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413303 | orchestrator | 20:47:40.409 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413308 | orchestrator | 20:47:40.409 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413313 | orchestrator | 20:47:40.409 STDOUT terraform:  + protocol = "udp" 2025-09-27 20:47:40.413317 | orchestrator | 20:47:40.409 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413322 | orchestrator | 20:47:40.409 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 20:47:40.413326 | orchestrator | 20:47:40.409 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413331 | orchestrator | 20:47:40.409 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 20:47:40.413336 | orchestrator | 20:47:40.410 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413340 | orchestrator | 20:47:40.410 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413345 | orchestrator | 20:47:40.410 STDOUT terraform:  } 2025-09-27 20:47:40.413349 | orchestrator | 20:47:40.410 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-27 20:47:40.413354 | orchestrator | 20:47:40.410 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-27 20:47:40.413359 | orchestrator | 20:47:40.410 STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413363 | orchestrator | 20:47:40.410 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413368 | orchestrator | 20:47:40.410 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413372 | orchestrator | 20:47:40.410 STDOUT terraform:  + protocol = "icmp" 2025-09-27 20:47:40.413377 | orchestrator | 20:47:40.410 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413381 | orchestrator | 20:47:40.410 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 20:47:40.413386 | orchestrator | 20:47:40.410 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413391 | orchestrator | 20:47:40.410 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 20:47:40.413395 | orchestrator | 20:47:40.410 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413400 | orchestrator | 20:47:40.410 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413404 | orchestrator | 20:47:40.410 STDOUT terraform:  } 2025-09-27 20:47:40.413415 | orchestrator | 20:47:40.410 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-27 20:47:40.413420 | orchestrator | 20:47:40.410 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-27 20:47:40.413425 | orchestrator | 20:47:40.410 STDOUT terraform:  + description = "vrrp" 2025-09-27 20:47:40.413430 | orchestrator | 20:47:40.410 STDOUT terraform:  + direction = "ingress" 2025-09-27 20:47:40.413434 | orchestrator | 20:47:40.410 STDOUT terraform:  + ethertype = "IPv4" 2025-09-27 20:47:40.413439 | orchestrator | 20:47:40.410 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413443 | orchestrator | 20:47:40.410 STDOUT terraform:  + protocol = "112" 2025-09-27 20:47:40.413448 | orchestrator | 20:47:40.410 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413453 | orchestrator | 20:47:40.410 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-27 20:47:40.413457 | orchestrator | 20:47:40.410 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-27 20:47:40.413462 | orchestrator | 20:47:40.411 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-27 20:47:40.413466 | orchestrator | 20:47:40.411 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-27 20:47:40.413471 | orchestrator | 20:47:40.411 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413475 | orchestrator | 20:47:40.411 STDOUT terraform:  } 2025-09-27 20:47:40.413480 | orchestrator | 20:47:40.411 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-09-27 20:47:40.413485 | orchestrator | 20:47:40.411 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-09-27 20:47:40.413489 | orchestrator | 20:47:40.411 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.413494 | orchestrator | 20:47:40.411 STDOUT terraform:  + description = "management security group" 2025-09-27 20:47:40.413499 | orchestrator | 20:47:40.411 
STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413503 | orchestrator | 20:47:40.411 STDOUT terraform:  + name = "testbed-management" 2025-09-27 20:47:40.413508 | orchestrator | 20:47:40.411 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413512 | orchestrator | 20:47:40.411 STDOUT terraform:  + stateful = (known after apply) 2025-09-27 20:47:40.413547 | orchestrator | 20:47:40.411 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413553 | orchestrator | 20:47:40.411 STDOUT terraform:  } 2025-09-27 20:47:40.413558 | orchestrator | 20:47:40.411 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-09-27 20:47:40.413562 | orchestrator | 20:47:40.411 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-09-27 20:47:40.413567 | orchestrator | 20:47:40.411 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.413571 | orchestrator | 20:47:40.411 STDOUT terraform:  + description = "node security group" 2025-09-27 20:47:40.413576 | orchestrator | 20:47:40.411 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413584 | orchestrator | 20:47:40.411 STDOUT terraform:  + name = "testbed-node" 2025-09-27 20:47:40.413588 | orchestrator | 20:47:40.411 STDOUT terraform:  + region = (known after apply) 2025-09-27 20:47:40.413593 | orchestrator | 20:47:40.411 STDOUT terraform:  + stateful = (known after apply) 2025-09-27 20:47:40.413597 | orchestrator | 20:47:40.411 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413602 | orchestrator | 20:47:40.411 STDOUT terraform:  } 2025-09-27 20:47:40.413607 | orchestrator | 20:47:40.411 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-09-27 20:47:40.413611 | orchestrator | 20:47:40.411 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-09-27 20:47:40.413619 | orchestrator | 20:47:40.411 STDOUT terraform:  + all_tags = (known after apply) 2025-09-27 20:47:40.413624 | orchestrator | 20:47:40.411 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-09-27 20:47:40.413629 | orchestrator | 20:47:40.411 STDOUT terraform:  + dns_nameservers = [ 2025-09-27 20:47:40.413633 | orchestrator | 20:47:40.411 STDOUT terraform:  + "8.8.8.8", 2025-09-27 20:47:40.413638 | orchestrator | 20:47:40.411 STDOUT terraform:  + "9.9.9.9", 2025-09-27 20:47:40.413643 | orchestrator | 20:47:40.411 STDOUT terraform:  ] 2025-09-27 20:47:40.413647 | orchestrator | 20:47:40.411 STDOUT terraform:  + enable_dhcp = true 2025-09-27 20:47:40.413652 | orchestrator | 20:47:40.411 STDOUT terraform:  + gateway_ip = (known after apply) 2025-09-27 20:47:40.413658 | orchestrator | 20:47:40.412 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413663 | orchestrator | 20:47:40.412 STDOUT terraform:  + ip_version = 4 2025-09-27 20:47:40.413668 | orchestrator | 20:47:40.412 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-09-27 20:47:40.413672 | orchestrator | 20:47:40.412 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-09-27 20:47:40.413677 | orchestrator | 20:47:40.412 STDOUT terraform:  + name = "subnet-testbed-management" 2025-09-27 20:47:40.413681 | orchestrator | 20:47:40.412 STDOUT terraform:  + network_id = (known after apply) 2025-09-27 20:47:40.413686 | orchestrator | 20:47:40.412 STDOUT terraform:  + no_gateway = false 2025-09-27 20:47:40.413691 | orchestrator | 20:47:40.412 STDOUT 
terraform:  + region = (known after apply) 2025-09-27 20:47:40.413695 | orchestrator | 20:47:40.412 STDOUT terraform:  + service_types = (known after apply) 2025-09-27 20:47:40.413700 | orchestrator | 20:47:40.412 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-27 20:47:40.413704 | orchestrator | 20:47:40.412 STDOUT terraform:  + allocation_pool { 2025-09-27 20:47:40.413709 | orchestrator | 20:47:40.412 STDOUT terraform:  + end = "192.168.31.250" 2025-09-27 20:47:40.413713 | orchestrator | 20:47:40.412 STDOUT terraform:  + start = "192.168.31.200" 2025-09-27 20:47:40.413718 | orchestrator | 20:47:40.412 STDOUT terraform:  } 2025-09-27 20:47:40.413723 | orchestrator | 20:47:40.412 STDOUT terraform:  } 2025-09-27 20:47:40.413727 | orchestrator | 20:47:40.412 STDOUT terraform:  # terraform_data.image will be created 2025-09-27 20:47:40.413734 | orchestrator | 20:47:40.412 STDOUT terraform:  + resource "terraform_data" "image" { 2025-09-27 20:47:40.413739 | orchestrator | 20:47:40.412 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413743 | orchestrator | 20:47:40.412 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-27 20:47:40.413748 | orchestrator | 20:47:40.412 STDOUT terraform:  + output = (known after apply) 2025-09-27 20:47:40.413753 | orchestrator | 20:47:40.412 STDOUT terraform:  } 2025-09-27 20:47:40.413757 | orchestrator | 20:47:40.412 STDOUT terraform:  # terraform_data.image_node will be created 2025-09-27 20:47:40.413762 | orchestrator | 20:47:40.412 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-09-27 20:47:40.413766 | orchestrator | 20:47:40.412 STDOUT terraform:  + id = (known after apply) 2025-09-27 20:47:40.413771 | orchestrator | 20:47:40.412 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-09-27 20:47:40.413776 | orchestrator | 20:47:40.412 STDOUT terraform:  + output = (known after apply) 2025-09-27 20:47:40.413780 | orchestrator | 20:47:40.412 STDOUT terraform:  } 2025-09-27 20:47:40.413785 | orchestrator | 20:47:40.412 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-09-27 20:47:40.413789 | orchestrator | 20:47:40.412 STDOUT terraform: Changes to Outputs: 2025-09-27 20:47:40.413794 | orchestrator | 20:47:40.412 STDOUT terraform:  + manager_address = (sensitive value) 2025-09-27 20:47:40.413798 | orchestrator | 20:47:40.412 STDOUT terraform:  + private_key = (sensitive value) 2025-09-27 20:47:40.539523 | orchestrator | 20:47:40.538 STDOUT terraform: terraform_data.image_node: Creating... 2025-09-27 20:47:40.539583 | orchestrator | 20:47:40.538 STDOUT terraform: terraform_data.image: Creating... 2025-09-27 20:47:40.539593 | orchestrator | 20:47:40.538 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=63bd3daf-7869-c7dd-398b-dfab102e8630] 2025-09-27 20:47:40.539602 | orchestrator | 20:47:40.538 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f94241f4-5c68-c1c0-de18-b02c19b59737] 2025-09-27 20:47:40.558786 | orchestrator | 20:47:40.558 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-09-27 20:47:40.559170 | orchestrator | 20:47:40.559 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-09-27 20:47:40.568176 | orchestrator | 20:47:40.567 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-09-27 20:47:40.568200 | orchestrator | 20:47:40.567 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 
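The router, management subnet, and router interface entries in the plan above map to roughly the following Terraform configuration. This is a minimal sketch reconstructed from the plan output: resource names and literal values are copied from the plan, while the reference to openstack_networking_network_v2.net_management assumes the network resource that the same plan creates elsewhere.

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # external network from the plan
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP pool sits outside the statically assigned addresses such as 192.168.16.15
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}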
2025-09-27 20:47:40.568210 | orchestrator | 20:47:40.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-09-27 20:47:40.568214 | orchestrator | 20:47:40.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-09-27 20:47:40.568218 | orchestrator | 20:47:40.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-09-27 20:47:40.568222 | orchestrator | 20:47:40.568 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-09-27 20:47:40.573356 | orchestrator | 20:47:40.572 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-09-27 20:47:40.574193 | orchestrator | 20:47:40.574 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-09-27 20:47:41.092229 | orchestrator | 20:47:41.092 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-27 20:47:41.096043 | orchestrator | 20:47:41.095 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-09-27 20:47:41.100768 | orchestrator | 20:47:41.100 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-09-27 20:47:41.106162 | orchestrator | 20:47:41.106 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-09-27 20:47:41.148382 | orchestrator | 20:47:41.148 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-09-27 20:47:41.158790 | orchestrator | 20:47:41.158 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-09-27 20:47:41.644154 | orchestrator | 20:47:41.643 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=e992d641-7639-450b-9b97-c673de7a398c] 2025-09-27 20:47:41.655499 | orchestrator | 20:47:41.654 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-09-27 20:47:44.248672 | orchestrator | 20:47:44.247 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=a92b9860-302a-4dfa-9a5b-f64375177990] 2025-09-27 20:47:44.248784 | orchestrator | 20:47:44.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=f7aa810c-750c-432b-b053-2bc489acb9c9] 2025-09-27 20:47:44.257245 | orchestrator | 20:47:44.256 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-09-27 20:47:44.259178 | orchestrator | 20:47:44.258 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-09-27 20:47:44.268622 | orchestrator | 20:47:44.268 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=13607e9c-06d4-4fec-b04d-15514859d6a0] 2025-09-27 20:47:44.271721 | orchestrator | 20:47:44.270 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=00c7ac73-0c66-4cdd-8f79-353d0386cdac] 2025-09-27 20:47:44.275529 | orchestrator | 20:47:44.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-09-27 20:47:44.278459 | orchestrator | 20:47:44.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
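The security-group rules planned earlier (ssh on 22/tcp, wireguard on 51820/udp, VRRP as IP protocol 112, plus intra-subnet tcp/udp/icmp rules limited to 192.168.16.0/20) correspond to openstack_networking_secgroup_rule_v2 resources. A minimal sketch for the management group follows; the literal values come from the plan, while which rule attaches to which group is inferred from the resource names.

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
  description       = "wireguard"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "udp"
  port_range_min    = 51820
  port_range_max    = 51820
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}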
2025-09-27 20:47:44.281240 | orchestrator | 20:47:44.281 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c] 2025-09-27 20:47:44.287469 | orchestrator | 20:47:44.287 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-09-27 20:47:44.298536 | orchestrator | 20:47:44.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=3ec8be80-0eed-4819-876a-b80c0ef8150e] 2025-09-27 20:47:44.307378 | orchestrator | 20:47:44.307 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-09-27 20:47:44.327637 | orchestrator | 20:47:44.327 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=fb7d096e-2368-48a2-bece-3fcee17790fa] 2025-09-27 20:47:44.339864 | orchestrator | 20:47:44.339 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-09-27 20:47:44.344505 | orchestrator | 20:47:44.344 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=be526692dbc918048f84b13da12d3346be969f2a] 2025-09-27 20:47:44.357001 | orchestrator | 20:47:44.356 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-09-27 20:47:44.361493 | orchestrator | 20:47:44.361 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=1d27bfee-58fc-413a-aadf-ce708d3c762a] 2025-09-27 20:47:44.362111 | orchestrator | 20:47:44.361 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=9a786db9a4b0a9fdec080f250c88e05864e76285] 2025-09-27 20:47:44.368670 | orchestrator | 20:47:44.368 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-09-27 20:47:44.388370 | orchestrator | 20:47:44.388 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=89df2119-9fed-4bd7-9779-2bc26187d4ad] 2025-09-27 20:47:45.016134 | orchestrator | 20:47:45.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=a6b1147b-6bd3-47da-8a82-1ade68ae9e5b] 2025-09-27 20:47:45.308050 | orchestrator | 20:47:45.307 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a3bdd81b-2973-445c-b72c-654490da6997] 2025-09-27 20:47:45.314312 | orchestrator | 20:47:45.314 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
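The indexed resources node_volume[0..8], node_base_volume[0..5], and manager_base_volume[0] being created above indicate count-based volume definitions. A sketch of that pattern follows; the counts match the indices in this log, but the volume sizes, naming, and the boot-image wiring are assumptions that are not visible here.

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9
  name  = "testbed-volume-${count.index}"  # naming is an assumption
  size  = 20                               # size in GB, assumed
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count = 6
  name  = "testbed-node-base-${count.index}"  # naming is an assumption
  size  = 50                                  # size in GB, assumed
  # boot-from-volume base image; the image data source is the one read in the log above,
  # wiring it into the base volume is an inference
  image_id = data.openstack_images_image_v2.image_node.id
}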
2025-09-27 20:47:47.666077 | orchestrator | 20:47:47.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=e37f5674-3a4c-4b27-8c5a-833e99a56bd6] 2025-09-27 20:47:47.689602 | orchestrator | 20:47:47.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=55c19aba-e9c5-4402-abdf-da1cbf841e84] 2025-09-27 20:47:47.711164 | orchestrator | 20:47:47.710 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=6b04e760-262c-4120-878c-1234782e5052] 2025-09-27 20:47:47.749703 | orchestrator | 20:47:47.749 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=7b1d39fe-a7ac-4042-839b-b249ebee98c5] 2025-09-27 20:47:47.750262 | orchestrator | 20:47:47.750 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=51e128d3-1914-4762-adc7-9d4270f02163] 2025-09-27 20:47:47.765740 | orchestrator | 20:47:47.759 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=96ed894f-b916-4151-acb7-f0197c26307f] 2025-09-27 20:47:48.066799 | orchestrator | 20:47:48.066 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=5ad6bb95-8031-4690-a4a7-a180b4abc2d0] 2025-09-27 20:47:48.085492 | orchestrator | 20:47:48.083 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-09-27 20:47:48.088052 | orchestrator | 20:47:48.085 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-09-27 20:47:48.091639 | orchestrator | 20:47:48.090 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-09-27 20:47:48.339743 | orchestrator | 20:47:48.339 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=c37ca3d5-6ae6-42d8-9b18-f69209e2de6b] 2025-09-27 20:47:48.352807 | orchestrator | 20:47:48.352 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-09-27 20:47:48.353153 | orchestrator | 20:47:48.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-09-27 20:47:48.353537 | orchestrator | 20:47:48.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-09-27 20:47:48.354070 | orchestrator | 20:47:48.353 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-09-27 20:47:48.360694 | orchestrator | 20:47:48.360 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-09-27 20:47:48.366638 | orchestrator | 20:47:48.366 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-09-27 20:47:48.367524 | orchestrator | 20:47:48.367 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-09-27 20:47:48.368884 | orchestrator | 20:47:48.368 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-09-27 20:47:48.413659 | orchestrator | 20:47:48.413 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=53939ac3-ab3d-42e3-b81e-d569ecee5323] 2025-09-27 20:47:48.422376 | orchestrator | 20:47:48.422 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 
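The six node management ports and the single manager port created above are openstack_networking_port_v2 resources on the management network. A sketch of the count-based form follows; the fixed-IP addressing scheme is an assumption inferred from the "192.168.16.15" fixed_ip fragment at the top of this plan excerpt.

resource "openstack_networking_port_v2" "node_port_management" {
  count              = 6
  network_id         = openstack_networking_network_v2.net_management.id
  security_group_ids = [openstack_networking_secgroup_v2.security_group_node.id]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.1${count.index}"  # assumed scheme, i.e. .10 through .15
  }
}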
2025-09-27 20:47:48.716003 | orchestrator | 20:47:48.715 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=358d3d46-3750-45c2-9e2d-e432a93a844d] 2025-09-27 20:47:48.728260 | orchestrator | 20:47:48.728 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-09-27 20:47:48.961012 | orchestrator | 20:47:48.960 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=62055814-8f65-42cb-93ed-efbd3a49a08b] 2025-09-27 20:47:48.967410 | orchestrator | 20:47:48.967 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-09-27 20:47:49.032095 | orchestrator | 20:47:49.031 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c9a1da41-c467-4574-90ee-69725f63a859] 2025-09-27 20:47:49.038219 | orchestrator | 20:47:49.038 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-09-27 20:47:49.147363 | orchestrator | 20:47:49.146 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=a9ad5a59-b7ec-420c-91d5-071715360c3f] 2025-09-27 20:47:49.160072 | orchestrator | 20:47:49.159 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-09-27 20:47:49.164968 | orchestrator | 20:47:49.164 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9c0161a3-a7d0-4713-847c-f1dac38a37e2] 2025-09-27 20:47:49.169595 | orchestrator | 20:47:49.169 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-09-27 20:47:49.292602 | orchestrator | 20:47:49.292 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e9c0f08e-9020-44b2-8dfa-2c59cc5db270] 2025-09-27 20:47:49.299093 | orchestrator | 20:47:49.298 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-09-27 20:47:49.467885 | orchestrator | 20:47:49.467 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b61a1124-aa8c-4d45-888a-a57d808d937f] 2025-09-27 20:47:49.473141 | orchestrator | 20:47:49.472 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2025-09-27 20:47:49.715406 | orchestrator | 20:47:49.713 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=26bbbba3-53b6-41d6-b38a-f7c40ab68308] 2025-09-27 20:47:49.727221 | orchestrator | 20:47:49.726 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=7b43e01b-6892-4a60-b169-7312e9835641] 2025-09-27 20:47:49.766692 | orchestrator | 20:47:49.766 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=297f1a01-2d26-49b1-8b73-2efdd04a7cc1] 2025-09-27 20:47:50.147378 | orchestrator | 20:47:50.146 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=edb3e73f-a101-4fdb-9a4b-055df060b2c2] 2025-09-27 20:47:50.353110 | orchestrator | 20:47:50.352 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=1093e115-f2c5-4bd9-9970-7ef287499019] 2025-09-27 20:47:50.547466 | orchestrator | 20:47:50.547 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=410897d5-e185-4981-9d11-b150ffa4eefb] 2025-09-27 20:47:50.606520 | orchestrator | 20:47:50.606 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=da5e3151-a9b9-4cc0-863b-948b7bfe5cf3] 2025-09-27 20:47:50.741081 | orchestrator | 20:47:50.740 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 2s [id=2bf5e214-b337-4699-abc0-767dc5ba8abf] 2025-09-27 20:47:51.481729 | orchestrator | 20:47:51.481 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=852e563d-971b-4f94-9486-30e4fa719338] 2025-09-27 20:47:51.510136 | orchestrator | 20:47:51.509 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-09-27 20:47:51.526211 | orchestrator | 20:47:51.526 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-09-27 20:47:51.531837 | orchestrator | 20:47:51.531 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-09-27 20:47:51.531918 | orchestrator | 20:47:51.531 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-09-27 20:47:51.532186 | orchestrator | 20:47:51.532 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-09-27 20:47:51.559761 | orchestrator | 20:47:51.559 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-09-27 20:47:51.563453 | orchestrator | 20:47:51.562 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-09-27 20:47:51.621567 | orchestrator | 20:47:51.621 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 3s [id=b57ff183-4d2d-4189-baef-99711e06200e] 2025-09-27 20:47:53.374416 | orchestrator | 20:47:53.370 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=78c054f6-bf52-44c7-b51f-a6cab7379a54] 2025-09-27 20:47:53.381481 | orchestrator | 20:47:53.381 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-09-27 20:47:53.394126 | orchestrator | 20:47:53.393 STDOUT terraform: local_file.inventory: Creating... 
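The manager's floating IP and its association with the manager port, created above, correspond to the following two resources. The pool name is an assumption (it is not visible in this log); the resource names and the port reference follow the apply output.

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"  # assumed pool name; must match the external network of the router
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}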
2025-09-27 20:47:53.394369 | orchestrator | 20:47:53.394 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-09-27 20:47:53.400732 | orchestrator | 20:47:53.400 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=ad87c11af9021180b5237e12c4f7ce8be692e736] 2025-09-27 20:47:53.401564 | orchestrator | 20:47:53.401 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ca57c9023fc27b69ea487d2cf9ba83e2c36166b0] 2025-09-27 20:47:54.155878 | orchestrator | 20:47:54.155 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=78c054f6-bf52-44c7-b51f-a6cab7379a54] 2025-09-27 20:48:01.530273 | orchestrator | 20:48:01.528 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-09-27 20:48:01.538249 | orchestrator | 20:48:01.538 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-09-27 20:48:01.539455 | orchestrator | 20:48:01.539 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-09-27 20:48:01.539586 | orchestrator | 20:48:01.539 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-09-27 20:48:01.566761 | orchestrator | 20:48:01.566 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-09-27 20:48:01.566992 | orchestrator | 20:48:01.566 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-09-27 20:48:11.530151 | orchestrator | 20:48:11.529 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-09-27 20:48:11.539173 | orchestrator | 20:48:11.538 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-09-27 20:48:11.540212 | orchestrator | 20:48:11.540 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-09-27 20:48:11.540354 | orchestrator | 20:48:11.540 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-09-27 20:48:11.567715 | orchestrator | 20:48:11.567 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-09-27 20:48:11.567761 | orchestrator | 20:48:11.567 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-09-27 20:48:12.105912 | orchestrator | 20:48:12.105 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=36e2bb61-7f26-4875-b171-d1ba9789d0b5] 2025-09-27 20:48:12.146173 | orchestrator | 20:48:12.145 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=ea974173-ceba-46b0-b501-1865ed19ff31] 2025-09-27 20:48:12.149711 | orchestrator | 20:48:12.149 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=eea396c6-4b57-46a4-a287-92c3726879e7] 2025-09-27 20:48:12.151933 | orchestrator | 20:48:12.151 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=70ed5825-f6fe-463e-bda7-669079996ffb] 2025-09-27 20:48:21.568496 | orchestrator | 20:48:21.567 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-09-27 20:48:21.568586 | orchestrator | 20:48:21.568 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-09-27 20:48:22.210799 | orchestrator | 20:48:22.210 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=04b37841-ae5a-41ac-9f95-ac8224f7f8fc] 2025-09-27 20:48:22.622667 | orchestrator | 20:48:22.622 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=537f14a0-6101-4aa5-b7bb-536d5f73ed48] 2025-09-27 20:48:22.643152 | orchestrator | 20:48:22.643 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-09-27 20:48:22.648431 | orchestrator | 20:48:22.648 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6160186610301839108] 2025-09-27 20:48:22.672429 | orchestrator | 20:48:22.672 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-27 20:48:22.682473 | orchestrator | 20:48:22.682 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-27 20:48:22.694313 | orchestrator | 20:48:22.694 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-27 20:48:22.697189 | orchestrator | 20:48:22.697 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-09-27 20:48:22.705216 | orchestrator | 20:48:22.705 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-27 20:48:22.711134 | orchestrator | 20:48:22.711 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-27 20:48:22.711683 | orchestrator | 20:48:22.711 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-27 20:48:22.712200 | orchestrator | 20:48:22.712 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-27 20:48:22.712669 | orchestrator | 20:48:22.712 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-27 20:48:22.719423 | orchestrator | 20:48:22.718 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
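The nine volume attachments above distribute three data volumes onto each of node_server[3], node_server[4], and node_server[5], which can be read off the instance IDs embedded in the attachment IDs. A sketch of that pattern follows; the modulo mapping is inferred from this log rather than taken from the actual configuration.

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9
  # attachment i goes to node 3 + (i mod 3), per the instance/volume ID pairs above
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}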
2025-09-27 20:48:26.058286 | orchestrator | 20:48:26.057 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=537f14a0-6101-4aa5-b7bb-536d5f73ed48/f7aa810c-750c-432b-b053-2bc489acb9c9] 2025-09-27 20:48:26.083829 | orchestrator | 20:48:26.083 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=70ed5825-f6fe-463e-bda7-669079996ffb/fb7d096e-2368-48a2-bece-3fcee17790fa] 2025-09-27 20:48:26.085558 | orchestrator | 20:48:26.085 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=36e2bb61-7f26-4875-b171-d1ba9789d0b5/57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c] 2025-09-27 20:48:26.149757 | orchestrator | 20:48:26.148 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=70ed5825-f6fe-463e-bda7-669079996ffb/89df2119-9fed-4bd7-9779-2bc26187d4ad] 2025-09-27 20:48:26.158528 | orchestrator | 20:48:26.152 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=36e2bb61-7f26-4875-b171-d1ba9789d0b5/1d27bfee-58fc-413a-aadf-ce708d3c762a] 2025-09-27 20:48:26.158669 | orchestrator | 20:48:26.158 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=537f14a0-6101-4aa5-b7bb-536d5f73ed48/13607e9c-06d4-4fec-b04d-15514859d6a0] 2025-09-27 20:48:26.338180 | orchestrator | 20:48:26.337 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=537f14a0-6101-4aa5-b7bb-536d5f73ed48/00c7ac73-0c66-4cdd-8f79-353d0386cdac] 2025-09-27 20:48:32.250035 | orchestrator | 20:48:32.249 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=70ed5825-f6fe-463e-bda7-669079996ffb/3ec8be80-0eed-4819-876a-b80c0ef8150e] 2025-09-27 20:48:32.261299 | orchestrator | 20:48:32.261 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=36e2bb61-7f26-4875-b171-d1ba9789d0b5/a92b9860-302a-4dfa-9a5b-f64375177990] 2025-09-27 20:48:32.685179 | orchestrator | 20:48:32.684 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-27 20:48:42.685907 | orchestrator | 20:48:42.685 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-27 20:48:43.070394 | orchestrator | 20:48:43.070 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=119a6650-0fdf-4ae6-ae44-da14438068be] 2025-09-27 20:48:43.087145 | orchestrator | 20:48:43.086 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
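The two outputs printed immediately below are declared sensitive, which is why Terraform masks their values both in the plan ("(sensitive value)") and after the apply. A sketch of the corresponding output blocks follows; the value expressions are assumptions, only the names and the sensitive flag are taken from the log.

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key  # assumed source
  sensitive = true
}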
2025-09-27 20:48:43.087268 | orchestrator | 20:48:43.087 STDOUT terraform: Outputs: 2025-09-27 20:48:43.087343 | orchestrator | 20:48:43.087 STDOUT terraform: manager_address = 2025-09-27 20:48:43.087350 | orchestrator | 20:48:43.087 STDOUT terraform: private_key = 2025-09-27 20:48:43.435872 | orchestrator | ok: Runtime: 0:01:08.285544 2025-09-27 20:48:43.467592 | 2025-09-27 20:48:43.467732 | TASK [Create infrastructure (stable)] 2025-09-27 20:48:44.001672 | orchestrator | skipping: Conditional result was False 2025-09-27 20:48:44.010918 | 2025-09-27 20:48:44.011065 | TASK [Fetch manager address] 2025-09-27 20:48:44.441039 | orchestrator | ok 2025-09-27 20:48:44.450245 | 2025-09-27 20:48:44.450366 | TASK [Set manager_host address] 2025-09-27 20:48:44.529575 | orchestrator | ok 2025-09-27 20:48:44.538656 | 2025-09-27 20:48:44.539127 | LOOP [Update ansible collections] 2025-09-27 20:48:45.393844 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-27 20:48:45.394042 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-27 20:48:45.394077 | orchestrator | Starting galaxy collection install process 2025-09-27 20:48:45.394101 | orchestrator | Process install dependency map 2025-09-27 20:48:45.396729 | orchestrator | Starting collection install process 2025-09-27 20:48:45.396793 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-09-27 20:48:45.396825 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-09-27 20:48:45.396852 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-27 20:48:45.396907 | orchestrator | ok: Item: commons Runtime: 0:00:00.555777 2025-09-27 20:48:46.250498 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-27 20:48:46.250590 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-27 20:48:46.250621 | orchestrator | Starting galaxy collection install process 2025-09-27 20:48:46.250654 | orchestrator | Process install dependency map 2025-09-27 20:48:46.250690 | orchestrator | Starting collection install process 2025-09-27 20:48:46.250720 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-09-27 20:48:46.250742 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-09-27 20:48:46.250762 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-27 20:48:46.250792 | orchestrator | ok: Item: services Runtime: 0:00:00.607701 2025-09-27 20:48:46.262137 | 2025-09-27 20:48:46.262228 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-27 20:48:56.760177 | orchestrator | ok 2025-09-27 20:48:56.771058 | 2025-09-27 20:48:56.771184 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-27 20:49:56.816997 | orchestrator | ok 2025-09-27 20:49:56.826781 | 2025-09-27 20:49:56.826963 | TASK [Fetch manager ssh hostkey] 2025-09-27 20:49:58.397202 | orchestrator | Output suppressed because no_log was given 2025-09-27 20:49:58.413162 | 2025-09-27 20:49:58.413334 | TASK [Get ssh keypair from terraform environment] 2025-09-27 20:49:58.948029 | orchestrator 
| ok: Runtime: 0:00:00.010256 2025-09-27 20:49:58.963713 | 2025-09-27 20:49:58.963875 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-27 20:49:59.002503 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-27 20:49:59.012737 | 2025-09-27 20:49:59.012866 | TASK [Run manager part 0] 2025-09-27 20:49:59.844257 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-27 20:49:59.887563 | orchestrator | 2025-09-27 20:49:59.887612 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-27 20:49:59.887619 | orchestrator | 2025-09-27 20:49:59.887632 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-27 20:50:01.611337 | orchestrator | ok: [testbed-manager] 2025-09-27 20:50:01.611402 | orchestrator | 2025-09-27 20:50:01.611429 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-27 20:50:01.611441 | orchestrator | 2025-09-27 20:50:01.611453 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 20:50:03.526046 | orchestrator | ok: [testbed-manager] 2025-09-27 20:50:03.526085 | orchestrator | 2025-09-27 20:50:03.526091 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-27 20:50:04.134045 | orchestrator | ok: [testbed-manager] 2025-09-27 20:50:04.134091 | orchestrator | 2025-09-27 20:50:04.134100 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-27 20:50:04.180770 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.180817 | orchestrator | 2025-09-27 20:50:04.180830 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-27 20:50:04.210110 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.210157 | orchestrator | 2025-09-27 20:50:04.210167 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-27 20:50:04.248325 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.248375 | orchestrator | 2025-09-27 20:50:04.248387 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-27 20:50:04.278695 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.278743 | orchestrator | 2025-09-27 20:50:04.278752 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-27 20:50:04.305784 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.305824 | orchestrator | 2025-09-27 20:50:04.305832 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-27 20:50:04.346012 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.346083 | orchestrator | 2025-09-27 20:50:04.346096 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-27 20:50:04.384118 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:50:04.384164 | orchestrator | 2025-09-27 20:50:04.384174 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-27 20:50:05.080595 | orchestrator | changed: 
[testbed-manager] 2025-09-27 20:50:05.080653 | orchestrator | 2025-09-27 20:50:05.080662 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-27 20:52:33.222059 | orchestrator | changed: [testbed-manager] 2025-09-27 20:52:33.222166 | orchestrator | 2025-09-27 20:52:33.222185 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-27 20:53:49.072526 | orchestrator | changed: [testbed-manager] 2025-09-27 20:53:49.072631 | orchestrator | 2025-09-27 20:53:49.072647 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-27 20:54:10.885574 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:10.885822 | orchestrator | 2025-09-27 20:54:10.885845 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-27 20:54:19.085330 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:19.085424 | orchestrator | 2025-09-27 20:54:19.085441 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-27 20:54:19.132473 | orchestrator | ok: [testbed-manager] 2025-09-27 20:54:19.132550 | orchestrator | 2025-09-27 20:54:19.132576 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-27 20:54:19.929802 | orchestrator | ok: [testbed-manager] 2025-09-27 20:54:19.929897 | orchestrator | 2025-09-27 20:54:19.929916 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-27 20:54:20.638959 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:20.639059 | orchestrator | 2025-09-27 20:54:20.639077 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-27 20:54:26.783932 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:26.784040 | orchestrator | 2025-09-27 20:54:26.784086 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-27 20:54:32.561655 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:32.562395 | orchestrator | 2025-09-27 20:54:32.562429 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-27 20:54:35.111670 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:35.111743 | orchestrator | 2025-09-27 20:54:35.111753 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-27 20:54:36.837293 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:36.837387 | orchestrator | 2025-09-27 20:54:36.837403 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-27 20:54:37.944430 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-27 20:54:37.944550 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-27 20:54:37.944565 | orchestrator | 2025-09-27 20:54:37.944578 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-27 20:54:37.986847 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-27 20:54:37.986899 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-27 20:54:37.986905 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-09-27 20:54:37.986910 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-27 20:54:41.230468 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-27 20:54:41.230547 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-27 20:54:41.230555 | orchestrator | 2025-09-27 20:54:41.230563 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-27 20:54:41.802385 | orchestrator | changed: [testbed-manager] 2025-09-27 20:54:41.802491 | orchestrator | 2025-09-27 20:54:41.802507 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-27 20:58:01.332943 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-27 20:58:01.333065 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-27 20:58:01.333083 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-27 20:58:01.333096 | orchestrator | 2025-09-27 20:58:01.333108 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-27 20:58:03.427568 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-27 20:58:03.427647 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-27 20:58:03.427660 | orchestrator | 2025-09-27 20:58:03.427673 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-27 20:58:03.427684 | orchestrator | 2025-09-27 20:58:03.427695 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 20:58:04.756048 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:04.756111 | orchestrator | 2025-09-27 20:58:04.756119 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-27 20:58:04.798048 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:04.798115 | orchestrator | 2025-09-27 20:58:04.798126 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-27 20:58:04.859221 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:04.859269 | orchestrator | 2025-09-27 20:58:04.859279 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-27 20:58:05.635584 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:05.635683 | orchestrator | 2025-09-27 20:58:05.635699 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-27 20:58:06.352770 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:06.353585 | orchestrator | 2025-09-27 20:58:06.353604 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-27 20:58:07.688156 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-27 20:58:07.688206 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-27 20:58:07.688213 | orchestrator | 2025-09-27 20:58:07.688228 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-27 20:58:09.065702 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:09.065771 | orchestrator | 2025-09-27 20:58:09.065780 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-09-27 20:58:10.769150 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 20:58:10.769197 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-27 20:58:10.769204 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-27 20:58:10.769210 | orchestrator | 2025-09-27 20:58:10.769218 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-27 20:58:10.825708 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:10.825758 | orchestrator | 2025-09-27 20:58:10.825766 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-27 20:58:11.368389 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:11.369150 | orchestrator | 2025-09-27 20:58:11.369180 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-27 20:58:11.435017 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:11.435121 | orchestrator | 2025-09-27 20:58:11.435138 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-27 20:58:12.279796 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 20:58:12.279857 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:12.279866 | orchestrator | 2025-09-27 20:58:12.279872 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-27 20:58:12.317986 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:12.318062 | orchestrator | 2025-09-27 20:58:12.318088 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-27 20:58:12.357794 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:12.357842 | orchestrator | 2025-09-27 20:58:12.357851 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-27 20:58:12.388238 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:12.388285 | orchestrator | 2025-09-27 20:58:12.388293 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-27 20:58:12.450516 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:12.450575 | orchestrator | 2025-09-27 20:58:12.450587 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-27 20:58:13.146231 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:13.146325 | orchestrator | 2025-09-27 20:58:13.146341 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-27 20:58:13.146354 | orchestrator | 2025-09-27 20:58:13.146365 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 20:58:14.564426 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:14.564515 | orchestrator | 2025-09-27 20:58:14.564529 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-27 20:58:15.509167 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:15.509278 | orchestrator | 2025-09-27 20:58:15.509295 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 20:58:15.509309 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-27 
20:58:15.509322 | orchestrator | 2025-09-27 20:58:15.834581 | orchestrator | ok: Runtime: 0:08:16.317826 2025-09-27 20:58:15.847911 | 2025-09-27 20:58:15.848036 | TASK [Point out that the log in on the manager is now possible] 2025-09-27 20:58:15.882652 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-27 20:58:15.892899 | 2025-09-27 20:58:15.893014 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-27 20:58:15.929349 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-09-27 20:58:15.940028 | 2025-09-27 20:58:15.940172 | TASK [Run manager part 1 + 2] 2025-09-27 20:58:16.807248 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-27 20:58:16.860291 | orchestrator | 2025-09-27 20:58:16.860394 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-27 20:58:16.860412 | orchestrator | 2025-09-27 20:58:16.860443 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 20:58:19.761235 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:19.761337 | orchestrator | 2025-09-27 20:58:19.761390 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-27 20:58:19.801850 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:19.801926 | orchestrator | 2025-09-27 20:58:19.801945 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-27 20:58:19.842578 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:19.842654 | orchestrator | 2025-09-27 20:58:19.842670 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-27 20:58:19.884120 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:19.884189 | orchestrator | 2025-09-27 20:58:19.884208 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-27 20:58:19.960450 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:19.960527 | orchestrator | 2025-09-27 20:58:19.960546 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-27 20:58:20.021324 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:20.021387 | orchestrator | 2025-09-27 20:58:20.021402 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-27 20:58:20.064981 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-27 20:58:20.065054 | orchestrator | 2025-09-27 20:58:20.065068 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-27 20:58:20.693365 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:20.693439 | orchestrator | 2025-09-27 20:58:20.693456 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-27 20:58:20.746138 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:20.746182 | orchestrator | 2025-09-27 20:58:20.746189 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-27 20:58:21.917855 | orchestrator | changed: 
[testbed-manager] 2025-09-27 20:58:21.917931 | orchestrator | 2025-09-27 20:58:21.917948 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-27 20:58:22.435334 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:22.435406 | orchestrator | 2025-09-27 20:58:22.435422 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-27 20:58:23.389188 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:23.389224 | orchestrator | 2025-09-27 20:58:23.389230 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-27 20:58:38.138095 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:38.138196 | orchestrator | 2025-09-27 20:58:38.138212 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-27 20:58:38.771041 | orchestrator | ok: [testbed-manager] 2025-09-27 20:58:38.771108 | orchestrator | 2025-09-27 20:58:38.771148 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-27 20:58:38.824783 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:38.824842 | orchestrator | 2025-09-27 20:58:38.824858 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-27 20:58:39.697692 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:39.697727 | orchestrator | 2025-09-27 20:58:39.697734 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-27 20:58:40.584772 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:40.584833 | orchestrator | 2025-09-27 20:58:40.584847 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-27 20:58:41.109950 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:41.110010 | orchestrator | 2025-09-27 20:58:41.110086 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-27 20:58:41.146429 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-27 20:58:41.146476 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-27 20:58:41.146482 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-27 20:58:41.146487 | orchestrator | deprecation_warnings=False in ansible.cfg. 
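For context, the osism.commons.repository tasks above replace the legacy /etc/apt/sources.list with a deb822-style /etc/apt/sources.list.d/ubuntu.sources file and then refresh the package cache. A minimal shell sketch of the equivalent manual steps, assuming default Ubuntu 24.04 (noble) mirror, suites, and keyring values that the role actually fills in from its own templates:

# Sketch only: the role performs these steps via Ansible modules, not a shell script.
set -e
mkdir -p /etc/apt/sources.list.d
rm -f /etc/apt/sources.list
cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
# Assumed default mirror, suites, and keyring for Ubuntu 24.04 (noble)
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
apt-get update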
2025-09-27 20:58:42.853861 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:42.853942 | orchestrator | 2025-09-27 20:58:42.853955 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-27 20:58:51.348211 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-27 20:58:51.348320 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-27 20:58:51.348337 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-27 20:58:51.348350 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-27 20:58:51.348370 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-27 20:58:51.348382 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-27 20:58:51.348393 | orchestrator | 2025-09-27 20:58:51.348406 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-27 20:58:52.397575 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:52.397684 | orchestrator | 2025-09-27 20:58:52.397700 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-27 20:58:52.443431 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:52.443474 | orchestrator | 2025-09-27 20:58:52.443481 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-27 20:58:55.505944 | orchestrator | changed: [testbed-manager] 2025-09-27 20:58:55.506076 | orchestrator | 2025-09-27 20:58:55.506096 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-27 20:58:55.547762 | orchestrator | skipping: [testbed-manager] 2025-09-27 20:58:55.547825 | orchestrator | 2025-09-27 20:58:55.547838 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-27 21:00:26.072161 | orchestrator | changed: [testbed-manager] 2025-09-27 21:00:26.072207 | orchestrator | 2025-09-27 21:00:26.072234 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-27 21:00:27.177526 | orchestrator | ok: [testbed-manager] 2025-09-27 21:00:27.177571 | orchestrator | 2025-09-27 21:00:27.177578 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:00:27.177585 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-27 21:00:27.177590 | orchestrator | 2025-09-27 21:00:27.634221 | orchestrator | ok: Runtime: 0:02:11.002575 2025-09-27 21:00:27.651080 | 2025-09-27 21:00:27.651221 | TASK [Reboot manager] 2025-09-27 21:00:29.186808 | orchestrator | ok: Runtime: 0:00:00.935117 2025-09-27 21:00:29.203438 | 2025-09-27 21:00:29.203590 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-27 21:00:42.657347 | orchestrator | ok 2025-09-27 21:00:42.668639 | 2025-09-27 21:00:42.668744 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-27 21:01:42.720428 | orchestrator | ok 2025-09-27 21:01:42.738303 | 2025-09-27 21:01:42.738518 | TASK [Deploy manager + bootstrap nodes] 2025-09-27 21:01:45.239366 | orchestrator | 2025-09-27 21:01:45.239587 | orchestrator | # DEPLOY MANAGER 2025-09-27 21:01:45.239623 | orchestrator | 2025-09-27 21:01:45.239639 | orchestrator | + set -e 2025-09-27 21:01:45.239653 | orchestrator | + echo 2025-09-27 21:01:45.239667 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-09-27 21:01:45.239685 | orchestrator | + echo 2025-09-27 21:01:45.239737 | orchestrator | + cat /opt/manager-vars.sh 2025-09-27 21:01:45.242551 | orchestrator | export NUMBER_OF_NODES=6 2025-09-27 21:01:45.242584 | orchestrator | 2025-09-27 21:01:45.242596 | orchestrator | export CEPH_VERSION=reef 2025-09-27 21:01:45.242609 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-27 21:01:45.242621 | orchestrator | export MANAGER_VERSION=latest 2025-09-27 21:01:45.242644 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-27 21:01:45.242655 | orchestrator | 2025-09-27 21:01:45.242674 | orchestrator | export ARA=false 2025-09-27 21:01:45.242685 | orchestrator | export DEPLOY_MODE=manager 2025-09-27 21:01:45.242703 | orchestrator | export TEMPEST=false 2025-09-27 21:01:45.242747 | orchestrator | export IS_ZUUL=true 2025-09-27 21:01:45.242759 | orchestrator | 2025-09-27 21:01:45.242777 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:01:45.242789 | orchestrator | export EXTERNAL_API=false 2025-09-27 21:01:45.242800 | orchestrator | 2025-09-27 21:01:45.242810 | orchestrator | export IMAGE_USER=ubuntu 2025-09-27 21:01:45.242825 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-27 21:01:45.242836 | orchestrator | 2025-09-27 21:01:45.242847 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-27 21:01:45.242864 | orchestrator | 2025-09-27 21:01:45.242876 | orchestrator | + echo 2025-09-27 21:01:45.242888 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-27 21:01:45.243955 | orchestrator | ++ export INTERACTIVE=false 2025-09-27 21:01:45.244019 | orchestrator | ++ INTERACTIVE=false 2025-09-27 21:01:45.244075 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-27 21:01:45.244099 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-27 21:01:45.244116 | orchestrator | + source /opt/manager-vars.sh 2025-09-27 21:01:45.244127 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-27 21:01:45.244138 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-27 21:01:45.244153 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-27 21:01:45.244246 | orchestrator | ++ CEPH_VERSION=reef 2025-09-27 21:01:45.244258 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-27 21:01:45.244270 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-27 21:01:45.244361 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-27 21:01:45.244378 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-27 21:01:45.244389 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-27 21:01:45.244433 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-27 21:01:45.244446 | orchestrator | ++ export ARA=false 2025-09-27 21:01:45.244457 | orchestrator | ++ ARA=false 2025-09-27 21:01:45.244468 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-27 21:01:45.244479 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-27 21:01:45.244489 | orchestrator | ++ export TEMPEST=false 2025-09-27 21:01:45.244500 | orchestrator | ++ TEMPEST=false 2025-09-27 21:01:45.244515 | orchestrator | ++ export IS_ZUUL=true 2025-09-27 21:01:45.244526 | orchestrator | ++ IS_ZUUL=true 2025-09-27 21:01:45.244537 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:01:45.244548 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:01:45.244559 | orchestrator | ++ export EXTERNAL_API=false 2025-09-27 21:01:45.244569 | orchestrator | ++ EXTERNAL_API=false 2025-09-27 21:01:45.244580 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-27 
21:01:45.244591 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-27 21:01:45.244601 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-27 21:01:45.244612 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-27 21:01:45.244624 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-27 21:01:45.244634 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-27 21:01:45.244645 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-27 21:01:45.297598 | orchestrator | + docker version 2025-09-27 21:01:45.553156 | orchestrator | Client: Docker Engine - Community 2025-09-27 21:01:45.553248 | orchestrator | Version: 27.5.1 2025-09-27 21:01:45.553264 | orchestrator | API version: 1.47 2025-09-27 21:01:45.553292 | orchestrator | Go version: go1.22.11 2025-09-27 21:01:45.553303 | orchestrator | Git commit: 9f9e405 2025-09-27 21:01:45.553324 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-27 21:01:45.553336 | orchestrator | OS/Arch: linux/amd64 2025-09-27 21:01:45.553346 | orchestrator | Context: default 2025-09-27 21:01:45.553366 | orchestrator | 2025-09-27 21:01:45.553376 | orchestrator | Server: Docker Engine - Community 2025-09-27 21:01:45.553386 | orchestrator | Engine: 2025-09-27 21:01:45.553397 | orchestrator | Version: 27.5.1 2025-09-27 21:01:45.553407 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-09-27 21:01:45.553442 | orchestrator | Go version: go1.22.11 2025-09-27 21:01:45.553452 | orchestrator | Git commit: 4c9b3b0 2025-09-27 21:01:45.553462 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-09-27 21:01:45.553484 | orchestrator | OS/Arch: linux/amd64 2025-09-27 21:01:45.553506 | orchestrator | Experimental: false 2025-09-27 21:01:45.553516 | orchestrator | containerd: 2025-09-27 21:01:45.553526 | orchestrator | Version: v1.7.28 2025-09-27 21:01:45.553536 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866 2025-09-27 21:01:45.553546 | orchestrator | runc: 2025-09-27 21:01:45.553556 | orchestrator | Version: 1.3.0 2025-09-27 21:01:45.553565 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1 2025-09-27 21:01:45.553574 | orchestrator | docker-init: 2025-09-27 21:01:45.553584 | orchestrator | Version: 0.19.0 2025-09-27 21:01:45.553594 | orchestrator | GitCommit: de40ad0 2025-09-27 21:01:45.556716 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-09-27 21:01:45.565749 | orchestrator | + set -e 2025-09-27 21:01:45.565798 | orchestrator | + source /opt/manager-vars.sh 2025-09-27 21:01:45.565809 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-27 21:01:45.565818 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-27 21:01:45.565828 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-27 21:01:45.565837 | orchestrator | ++ CEPH_VERSION=reef 2025-09-27 21:01:45.565847 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-27 21:01:45.565857 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-27 21:01:45.565867 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-27 21:01:45.565876 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-27 21:01:45.565885 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-27 21:01:45.565895 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-27 21:01:45.565905 | orchestrator | ++ export ARA=false 2025-09-27 21:01:45.565914 | orchestrator | ++ ARA=false 2025-09-27 21:01:45.565924 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-27 21:01:45.565933 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-27 21:01:45.565942 | orchestrator | ++ 
export TEMPEST=false 2025-09-27 21:01:45.565952 | orchestrator | ++ TEMPEST=false 2025-09-27 21:01:45.565968 | orchestrator | ++ export IS_ZUUL=true 2025-09-27 21:01:45.565978 | orchestrator | ++ IS_ZUUL=true 2025-09-27 21:01:45.565988 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:01:45.565997 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:01:45.566007 | orchestrator | ++ export EXTERNAL_API=false 2025-09-27 21:01:45.566083 | orchestrator | ++ EXTERNAL_API=false 2025-09-27 21:01:45.566095 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-27 21:01:45.566104 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-27 21:01:45.566124 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-27 21:01:45.566133 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-27 21:01:45.566143 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-27 21:01:45.566152 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-27 21:01:45.566162 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-27 21:01:45.566171 | orchestrator | ++ export INTERACTIVE=false 2025-09-27 21:01:45.566180 | orchestrator | ++ INTERACTIVE=false 2025-09-27 21:01:45.566189 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-27 21:01:45.566204 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-27 21:01:45.566217 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-27 21:01:45.566227 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:01:45.566236 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-09-27 21:01:45.573794 | orchestrator | + set -e 2025-09-27 21:01:45.574178 | orchestrator | + VERSION=reef 2025-09-27 21:01:45.574768 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-27 21:01:45.580532 | orchestrator | + [[ -n ceph_version: reef ]] 2025-09-27 21:01:45.580558 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-09-27 21:01:45.586213 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-09-27 21:01:45.593042 | orchestrator | + set -e 2025-09-27 21:01:45.593532 | orchestrator | + VERSION=2024.2 2025-09-27 21:01:45.593963 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-09-27 21:01:45.597462 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-09-27 21:01:45.597488 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-09-27 21:01:45.602421 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-09-27 21:01:45.602794 | orchestrator | ++ semver latest 7.0.0 2025-09-27 21:01:45.663849 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-27 21:01:45.663930 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:01:45.663946 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-09-27 21:01:45.663959 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-09-27 21:01:45.752349 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-27 21:01:45.758134 | orchestrator | + source /opt/venv/bin/activate 2025-09-27 21:01:45.759237 | orchestrator | ++ deactivate nondestructive 2025-09-27 21:01:45.759292 | orchestrator | ++ '[' -n '' ']' 2025-09-27 21:01:45.759304 | orchestrator | ++ '[' -n '' ']' 2025-09-27 21:01:45.759316 | orchestrator | ++ hash -r 2025-09-27 21:01:45.759344 | orchestrator | 
++ '[' -n '' ']' 2025-09-27 21:01:45.759360 | orchestrator | ++ unset VIRTUAL_ENV 2025-09-27 21:01:45.759372 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-09-27 21:01:45.759383 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-09-27 21:01:45.759418 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-09-27 21:01:45.759433 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-09-27 21:01:45.759447 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-09-27 21:01:45.759469 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-09-27 21:01:45.759484 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-27 21:01:45.759500 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-27 21:01:45.759511 | orchestrator | ++ export PATH 2025-09-27 21:01:45.759716 | orchestrator | ++ '[' -n '' ']' 2025-09-27 21:01:45.759776 | orchestrator | ++ '[' -z '' ']' 2025-09-27 21:01:45.759790 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-09-27 21:01:45.759800 | orchestrator | ++ PS1='(venv) ' 2025-09-27 21:01:45.759832 | orchestrator | ++ export PS1 2025-09-27 21:01:45.759845 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-09-27 21:01:45.759856 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-09-27 21:01:45.759871 | orchestrator | ++ hash -r 2025-09-27 21:01:45.759906 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-09-27 21:01:46.907436 | orchestrator | 2025-09-27 21:01:46.907565 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-09-27 21:01:46.907582 | orchestrator | 2025-09-27 21:01:46.907594 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-27 21:01:47.435403 | orchestrator | ok: [testbed-manager] 2025-09-27 21:01:47.435534 | orchestrator | 2025-09-27 21:01:47.435550 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-27 21:01:48.362118 | orchestrator | changed: [testbed-manager] 2025-09-27 21:01:48.362218 | orchestrator | 2025-09-27 21:01:48.362231 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-09-27 21:01:48.362241 | orchestrator | 2025-09-27 21:01:48.362250 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:01:50.533176 | orchestrator | ok: [testbed-manager] 2025-09-27 21:01:50.533357 | orchestrator | 2025-09-27 21:01:50.533390 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-09-27 21:01:50.576489 | orchestrator | ok: [testbed-manager] 2025-09-27 21:01:50.576524 | orchestrator | 2025-09-27 21:01:50.576540 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-09-27 21:01:51.007000 | orchestrator | changed: [testbed-manager] 2025-09-27 21:01:51.007091 | orchestrator | 2025-09-27 21:01:51.007104 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-09-27 21:01:51.039601 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:01:51.039629 | orchestrator | 2025-09-27 21:01:51.039641 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-09-27 21:01:51.374171 | orchestrator | changed: [testbed-manager] 2025-09-27 21:01:51.374302 | orchestrator | 2025-09-27 21:01:51.374318 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-09-27 21:01:51.427253 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:01:51.427301 | orchestrator | 2025-09-27 21:01:51.427313 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-09-27 21:01:51.751590 | orchestrator | ok: [testbed-manager] 2025-09-27 21:01:51.751710 | orchestrator | 2025-09-27 21:01:51.751727 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-09-27 21:01:51.863834 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:01:51.863943 | orchestrator | 2025-09-27 21:01:51.863958 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-09-27 21:01:51.863970 | orchestrator | 2025-09-27 21:01:51.863986 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:01:54.502275 | orchestrator | ok: [testbed-manager] 2025-09-27 21:01:54.502395 | orchestrator | 2025-09-27 21:01:54.502410 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-09-27 21:01:54.599111 | orchestrator | included: osism.services.traefik for testbed-manager 2025-09-27 21:01:54.599206 | orchestrator | 2025-09-27 21:01:54.599221 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-09-27 21:01:54.644820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-09-27 21:01:54.644909 | orchestrator | 2025-09-27 21:01:54.644925 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-09-27 21:01:55.688654 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-09-27 21:01:55.688758 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-09-27 21:01:55.688772 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-09-27 21:01:55.688784 | orchestrator | 2025-09-27 21:01:55.688796 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-09-27 21:01:57.415074 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-09-27 21:01:57.415185 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-09-27 21:01:57.415201 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-09-27 21:01:57.415213 | orchestrator | 2025-09-27 21:01:57.415224 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-09-27 21:01:58.033623 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:01:58.033728 | orchestrator | changed: [testbed-manager] 2025-09-27 21:01:58.033744 | orchestrator | 2025-09-27 21:01:58.033757 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-09-27 21:01:58.631028 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:01:58.631122 | orchestrator | changed: [testbed-manager] 2025-09-27 21:01:58.631136 | orchestrator | 2025-09-27 21:01:58.631149 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-09-27 21:01:58.684630 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:01:58.684684 | orchestrator | 2025-09-27 21:01:58.684699 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-09-27 21:01:59.041892 | orchestrator | ok: [testbed-manager] 2025-09-27 21:01:59.041996 | orchestrator | 2025-09-27 21:01:59.042012 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-09-27 21:01:59.116844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-09-27 21:01:59.116922 | orchestrator | 2025-09-27 21:01:59.116937 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-09-27 21:02:00.140048 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:00.140168 | orchestrator | 2025-09-27 21:02:00.140184 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-09-27 21:02:00.912953 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:00.913066 | orchestrator | 2025-09-27 21:02:00.913083 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-09-27 21:02:12.364875 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:12.365017 | orchestrator | 2025-09-27 21:02:12.365035 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-09-27 21:02:12.411724 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:02:12.411842 | orchestrator | 2025-09-27 21:02:12.411861 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-09-27 21:02:12.411874 | orchestrator | 2025-09-27 21:02:12.411886 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:02:15.098876 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:15.098987 | orchestrator | 2025-09-27 21:02:15.099034 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-09-27 21:02:15.224513 | orchestrator | included: osism.services.manager for testbed-manager 2025-09-27 21:02:15.224606 | orchestrator | 2025-09-27 21:02:15.224619 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-09-27 21:02:15.281081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:02:15.281154 | orchestrator | 2025-09-27 21:02:15.281165 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-09-27 21:02:17.699500 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:17.699609 | orchestrator | 2025-09-27 21:02:17.699624 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-09-27 21:02:17.754501 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:17.754563 | orchestrator | 2025-09-27 21:02:17.754574 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-09-27 21:02:17.874346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-09-27 21:02:17.874444 | orchestrator | 2025-09-27 21:02:17.874459 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-09-27 21:02:20.676169 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-09-27 21:02:20.676357 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-09-27 21:02:20.676380 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-09-27 21:02:20.676392 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-09-27 21:02:20.676403 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-09-27 21:02:20.676414 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-09-27 21:02:20.676425 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-09-27 21:02:20.676436 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-09-27 21:02:20.676447 | orchestrator | 2025-09-27 21:02:20.676459 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-09-27 21:02:21.298883 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:21.298995 | orchestrator | 2025-09-27 21:02:21.299011 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-09-27 21:02:21.931415 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:21.931515 | orchestrator | 2025-09-27 21:02:21.931531 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-09-27 21:02:22.007891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-09-27 21:02:22.007944 | orchestrator | 2025-09-27 21:02:22.007960 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-09-27 21:02:23.177779 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-09-27 21:02:23.178679 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-09-27 21:02:23.178715 | orchestrator | 2025-09-27 21:02:23.178729 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-09-27 21:02:23.804928 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:23.805029 | orchestrator | 2025-09-27 21:02:23.805044 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-09-27 21:02:23.852191 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:02:23.852239 | orchestrator | 2025-09-27 21:02:23.852339 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2025-09-27 21:02:23.932344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2025-09-27 21:02:23.932433 | orchestrator | 2025-09-27 21:02:23.932448 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2025-09-27 21:02:24.546642 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:24.546749 | orchestrator | 2025-09-27 21:02:24.546764 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-09-27 21:02:24.604123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-09-27 21:02:24.604218 | orchestrator | 2025-09-27 21:02:24.604233 | orchestrator | 
TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-09-27 21:02:25.924093 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:02:25.924199 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:02:25.924213 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:25.924226 | orchestrator | 2025-09-27 21:02:25.924238 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-09-27 21:02:26.550401 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:26.550522 | orchestrator | 2025-09-27 21:02:26.550538 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-09-27 21:02:26.612250 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:02:26.612352 | orchestrator | 2025-09-27 21:02:26.612366 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-09-27 21:02:26.692567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-09-27 21:02:26.692642 | orchestrator | 2025-09-27 21:02:26.692656 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-09-27 21:02:27.201917 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:27.202081 | orchestrator | 2025-09-27 21:02:27.202098 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-09-27 21:02:27.582885 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:27.582999 | orchestrator | 2025-09-27 21:02:27.583015 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-09-27 21:02:28.783095 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-09-27 21:02:28.783210 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-09-27 21:02:28.783225 | orchestrator | 2025-09-27 21:02:28.783238 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-09-27 21:02:29.421366 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:29.421469 | orchestrator | 2025-09-27 21:02:29.421485 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-09-27 21:02:29.808090 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:29.808189 | orchestrator | 2025-09-27 21:02:29.808203 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-09-27 21:02:30.153500 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:30.153602 | orchestrator | 2025-09-27 21:02:30.153617 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-09-27 21:02:30.188181 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:02:30.188239 | orchestrator | 2025-09-27 21:02:30.188254 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-09-27 21:02:30.248880 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-09-27 21:02:30.248927 | orchestrator | 2025-09-27 21:02:30.248939 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-09-27 21:02:30.278885 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:30.278923 | 
orchestrator | 2025-09-27 21:02:30.278934 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-09-27 21:02:32.260533 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-09-27 21:02:32.260641 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-09-27 21:02:32.260656 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-09-27 21:02:32.260668 | orchestrator | 2025-09-27 21:02:32.260680 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-09-27 21:02:32.954386 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:32.954495 | orchestrator | 2025-09-27 21:02:32.954513 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-09-27 21:02:33.655011 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:33.655901 | orchestrator | 2025-09-27 21:02:33.655934 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-09-27 21:02:34.334100 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:34.334205 | orchestrator | 2025-09-27 21:02:34.334220 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-09-27 21:02:34.406921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-09-27 21:02:34.407015 | orchestrator | 2025-09-27 21:02:34.407028 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-09-27 21:02:34.449979 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:34.450151 | orchestrator | 2025-09-27 21:02:34.450168 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-09-27 21:02:35.171881 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-09-27 21:02:35.171983 | orchestrator | 2025-09-27 21:02:35.171997 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-09-27 21:02:35.262665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-09-27 21:02:35.262746 | orchestrator | 2025-09-27 21:02:35.262766 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-09-27 21:02:35.957013 | orchestrator | changed: [testbed-manager] 2025-09-27 21:02:35.957115 | orchestrator | 2025-09-27 21:02:35.957129 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-09-27 21:02:36.524659 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:36.524785 | orchestrator | 2025-09-27 21:02:36.524819 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-09-27 21:02:36.571404 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:02:36.571470 | orchestrator | 2025-09-27 21:02:36.571483 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-09-27 21:02:36.626189 | orchestrator | ok: [testbed-manager] 2025-09-27 21:02:36.626283 | orchestrator | 2025-09-27 21:02:36.626363 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-09-27 21:02:37.449871 | orchestrator | changed: [testbed-manager] 2025-09-27 
21:02:37.449971 | orchestrator | 2025-09-27 21:02:37.449987 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-09-27 21:03:43.045547 | orchestrator | changed: [testbed-manager] 2025-09-27 21:03:43.045684 | orchestrator | 2025-09-27 21:03:43.045702 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-09-27 21:03:44.065822 | orchestrator | ok: [testbed-manager] 2025-09-27 21:03:44.065937 | orchestrator | 2025-09-27 21:03:44.065951 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-09-27 21:03:44.160004 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:03:44.160106 | orchestrator | 2025-09-27 21:03:44.160121 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-09-27 21:03:46.546275 | orchestrator | changed: [testbed-manager] 2025-09-27 21:03:46.547275 | orchestrator | 2025-09-27 21:03:46.547311 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-09-27 21:03:46.612853 | orchestrator | ok: [testbed-manager] 2025-09-27 21:03:46.612963 | orchestrator | 2025-09-27 21:03:46.612977 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-27 21:03:46.612990 | orchestrator | 2025-09-27 21:03:46.613001 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-09-27 21:03:46.663660 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:03:46.663693 | orchestrator | 2025-09-27 21:03:46.663704 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-09-27 21:04:46.724045 | orchestrator | Pausing for 60 seconds 2025-09-27 21:04:46.724182 | orchestrator | changed: [testbed-manager] 2025-09-27 21:04:46.724198 | orchestrator | 2025-09-27 21:04:46.724211 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-09-27 21:04:51.720885 | orchestrator | changed: [testbed-manager] 2025-09-27 21:04:51.721018 | orchestrator | 2025-09-27 21:04:51.721036 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-09-27 21:05:33.289388 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-09-27 21:05:33.289535 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
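The handler above polls until the manager service's containers report a healthy Docker health status, retrying up to 50 times. A minimal sketch of such a polling loop, with an illustrative container name, retry limit, and sleep interval (the deploy script later performs the same docker inspect check in its wait_for_container_healthy helper):

# Sketch only: poll a container's health status until it reports "healthy".
wait_healthy() {
  local name="$1" max_attempts="${2:-50}" attempt=1
  until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "$name did not become healthy after $max_attempts attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 5
  done
}
wait_healthy osism-ansible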
2025-09-27 21:05:33.289543 | orchestrator | changed: [testbed-manager] 2025-09-27 21:05:33.289574 | orchestrator | 2025-09-27 21:05:33.289579 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-09-27 21:05:42.842284 | orchestrator | changed: [testbed-manager] 2025-09-27 21:05:42.842475 | orchestrator | 2025-09-27 21:05:42.842502 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-09-27 21:05:42.918732 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-09-27 21:05:42.918836 | orchestrator | 2025-09-27 21:05:42.918850 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-09-27 21:05:42.918863 | orchestrator | 2025-09-27 21:05:42.918874 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-09-27 21:05:42.971808 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:05:42.971921 | orchestrator | 2025-09-27 21:05:42.971937 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2025-09-27 21:05:43.064181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2025-09-27 21:05:43.064298 | orchestrator | 2025-09-27 21:05:43.064313 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2025-09-27 21:05:43.830475 | orchestrator | changed: [testbed-manager] 2025-09-27 21:05:43.830560 | orchestrator | 2025-09-27 21:05:43.830574 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2025-09-27 21:05:47.467861 | orchestrator | ok: [testbed-manager] 2025-09-27 21:05:47.468000 | orchestrator | 2025-09-27 21:05:47.468018 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2025-09-27 21:05:47.534133 | orchestrator | ok: [testbed-manager] => { 2025-09-27 21:05:47.534294 | orchestrator | "version_check_result.stdout_lines": [ 2025-09-27 21:05:47.534321 | orchestrator | "=== OSISM Container Version Check ===", 2025-09-27 21:05:47.534341 | orchestrator | "Checking running containers against expected versions...", 2025-09-27 21:05:47.534361 | orchestrator | "", 2025-09-27 21:05:47.534380 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2025-09-27 21:05:47.534400 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2025-09-27 21:05:47.534500 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534523 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2025-09-27 21:05:47.534544 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.534560 | orchestrator | "", 2025-09-27 21:05:47.534572 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2025-09-27 21:05:47.534583 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2025-09-27 21:05:47.534594 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534605 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2025-09-27 21:05:47.534616 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.534627 | orchestrator | "", 2025-09-27 21:05:47.534638 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2025-09-27 21:05:47.534649 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2025-09-27 21:05:47.534659 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534670 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2025-09-27 21:05:47.534681 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.534692 | orchestrator | "", 2025-09-27 21:05:47.534703 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2025-09-27 21:05:47.534713 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2025-09-27 21:05:47.534724 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534736 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2025-09-27 21:05:47.534747 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.534757 | orchestrator | "", 2025-09-27 21:05:47.534768 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2025-09-27 21:05:47.534779 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-09-27 21:05:47.534817 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534829 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2025-09-27 21:05:47.534839 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.534850 | orchestrator | "", 2025-09-27 21:05:47.534861 | orchestrator | "Checking service: osismclient (OSISM Client)", 2025-09-27 21:05:47.534871 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.534882 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534893 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.534904 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.534914 | orchestrator | "", 2025-09-27 21:05:47.534925 | orchestrator | "Checking service: ara-server (ARA Server)", 2025-09-27 21:05:47.534937 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2025-09-27 21:05:47.534956 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.534972 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2025-09-27 21:05:47.534989 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535009 | orchestrator | "", 2025-09-27 21:05:47.535029 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2025-09-27 21:05:47.535058 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-09-27 21:05:47.535071 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535082 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3", 2025-09-27 21:05:47.535092 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535103 | orchestrator | "", 2025-09-27 21:05:47.535113 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2025-09-27 21:05:47.535124 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2025-09-27 21:05:47.535135 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535151 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2025-09-27 21:05:47.535162 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535173 | orchestrator | "", 2025-09-27 21:05:47.535183 | orchestrator | "Checking service: redis (Redis Cache)", 2025-09-27 21:05:47.535194 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-09-27 21:05:47.535205 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535215 | orchestrator | 
" Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine", 2025-09-27 21:05:47.535226 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535236 | orchestrator | "", 2025-09-27 21:05:47.535247 | orchestrator | "Checking service: api (OSISM API Service)", 2025-09-27 21:05:47.535258 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535268 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535279 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535289 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535300 | orchestrator | "", 2025-09-27 21:05:47.535310 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2025-09-27 21:05:47.535321 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535331 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535342 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535352 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535363 | orchestrator | "", 2025-09-27 21:05:47.535373 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2025-09-27 21:05:47.535384 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535394 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535405 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535439 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535456 | orchestrator | "", 2025-09-27 21:05:47.535466 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2025-09-27 21:05:47.535477 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535487 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535498 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535518 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535529 | orchestrator | "", 2025-09-27 21:05:47.535540 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2025-09-27 21:05:47.535571 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535582 | orchestrator | " Enabled: true", 2025-09-27 21:05:47.535592 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2025-09-27 21:05:47.535603 | orchestrator | " Status: ✅ MATCH", 2025-09-27 21:05:47.535613 | orchestrator | "", 2025-09-27 21:05:47.535624 | orchestrator | "=== Summary ===", 2025-09-27 21:05:47.535634 | orchestrator | "Errors (version mismatches): 0", 2025-09-27 21:05:47.535645 | orchestrator | "Warnings (expected containers not running): 0", 2025-09-27 21:05:47.535655 | orchestrator | "", 2025-09-27 21:05:47.535666 | orchestrator | "✅ All running containers match expected versions!" 
2025-09-27 21:05:47.535677 | orchestrator | ] 2025-09-27 21:05:47.535687 | orchestrator | } 2025-09-27 21:05:47.535698 | orchestrator | 2025-09-27 21:05:47.535709 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2025-09-27 21:05:47.586840 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:05:47.586935 | orchestrator | 2025-09-27 21:05:47.586948 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:05:47.586963 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-27 21:05:47.586975 | orchestrator | 2025-09-27 21:05:47.697305 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-09-27 21:05:47.697467 | orchestrator | + deactivate 2025-09-27 21:05:47.697485 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-09-27 21:05:47.697500 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-09-27 21:05:47.697511 | orchestrator | + export PATH 2025-09-27 21:05:47.697522 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-09-27 21:05:47.697533 | orchestrator | + '[' -n '' ']' 2025-09-27 21:05:47.697544 | orchestrator | + hash -r 2025-09-27 21:05:47.697555 | orchestrator | + '[' -n '' ']' 2025-09-27 21:05:47.697565 | orchestrator | + unset VIRTUAL_ENV 2025-09-27 21:05:47.697576 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-09-27 21:05:47.697587 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-09-27 21:05:47.697597 | orchestrator | + unset -f deactivate 2025-09-27 21:05:47.697609 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-09-27 21:05:47.703624 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-27 21:05:47.703659 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-27 21:05:47.703670 | orchestrator | + local max_attempts=60 2025-09-27 21:05:47.703681 | orchestrator | + local name=ceph-ansible 2025-09-27 21:05:47.703692 | orchestrator | + local attempt_num=1 2025-09-27 21:05:47.704699 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:05:47.741358 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:05:47.741471 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-27 21:05:47.741485 | orchestrator | + local max_attempts=60 2025-09-27 21:05:47.741496 | orchestrator | + local name=kolla-ansible 2025-09-27 21:05:47.741507 | orchestrator | + local attempt_num=1 2025-09-27 21:05:47.742111 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-27 21:05:47.777598 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:05:47.777670 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-27 21:05:47.777682 | orchestrator | + local max_attempts=60 2025-09-27 21:05:47.777694 | orchestrator | + local name=osism-ansible 2025-09-27 21:05:47.777705 | orchestrator | + local attempt_num=1 2025-09-27 21:05:47.778599 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-27 21:05:47.819653 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:05:47.819735 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-27 21:05:47.819748 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-27 
21:05:48.507494 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-09-27 21:05:48.711569 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-09-27 21:05:48.711702 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711717 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711728 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-09-27 21:05:48.711742 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2025-09-27 21:05:48.711754 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711765 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711793 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy) 2025-09-27 21:05:48.711805 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711816 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2025-09-27 21:05:48.711826 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711837 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2025-09-27 21:05:48.711848 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711859 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2025-09-27 21:05:48.711869 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.711880 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2025-09-27 21:05:48.715265 | orchestrator | ++ semver latest 7.0.0 2025-09-27 21:05:48.750299 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-27 21:05:48.750382 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:05:48.750396 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-27 21:05:48.752538 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-27 21:06:01.070497 | orchestrator | 2025-09-27 21:06:01 | INFO  | Task 05ab9ded-e1bc-4142-84c5-efd6f30a8781 (resolvconf) was 
prepared for execution. 2025-09-27 21:06:01.070642 | orchestrator | 2025-09-27 21:06:01 | INFO  | It takes a moment until task 05ab9ded-e1bc-4142-84c5-efd6f30a8781 (resolvconf) has been started and output is visible here. 2025-09-27 21:06:13.986810 | orchestrator | 2025-09-27 21:06:13.986895 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-27 21:06:13.986903 | orchestrator | 2025-09-27 21:06:13.986907 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:06:13.986913 | orchestrator | Saturday 27 September 2025 21:06:04 +0000 (0:00:00.107) 0:00:00.107 **** 2025-09-27 21:06:13.986918 | orchestrator | ok: [testbed-manager] 2025-09-27 21:06:13.986923 | orchestrator | 2025-09-27 21:06:13.986928 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-27 21:06:13.986933 | orchestrator | Saturday 27 September 2025 21:06:07 +0000 (0:00:03.375) 0:00:03.482 **** 2025-09-27 21:06:13.986937 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:06:13.986942 | orchestrator | 2025-09-27 21:06:13.986946 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-27 21:06:13.986951 | orchestrator | Saturday 27 September 2025 21:06:07 +0000 (0:00:00.061) 0:00:03.544 **** 2025-09-27 21:06:13.986955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-27 21:06:13.986960 | orchestrator | 2025-09-27 21:06:13.986965 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-27 21:06:13.986969 | orchestrator | Saturday 27 September 2025 21:06:07 +0000 (0:00:00.065) 0:00:03.609 **** 2025-09-27 21:06:13.986979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:06:13.986984 | orchestrator | 2025-09-27 21:06:13.986988 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-27 21:06:13.986992 | orchestrator | Saturday 27 September 2025 21:06:08 +0000 (0:00:00.077) 0:00:03.686 **** 2025-09-27 21:06:13.986996 | orchestrator | ok: [testbed-manager] 2025-09-27 21:06:13.987000 | orchestrator | 2025-09-27 21:06:13.987004 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-27 21:06:13.987008 | orchestrator | Saturday 27 September 2025 21:06:08 +0000 (0:00:00.863) 0:00:04.550 **** 2025-09-27 21:06:13.987012 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:06:13.987017 | orchestrator | 2025-09-27 21:06:13.987021 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-27 21:06:13.987025 | orchestrator | Saturday 27 September 2025 21:06:08 +0000 (0:00:00.056) 0:00:04.606 **** 2025-09-27 21:06:13.987029 | orchestrator | ok: [testbed-manager] 2025-09-27 21:06:13.987033 | orchestrator | 2025-09-27 21:06:13.987037 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-27 21:06:13.987041 | orchestrator | Saturday 27 September 2025 21:06:09 +0000 (0:00:00.412) 0:00:05.019 **** 2025-09-27 21:06:13.987045 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:06:13.987049 | orchestrator | 2025-09-27 21:06:13.987054 | 
orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-27 21:06:13.987059 | orchestrator | Saturday 27 September 2025 21:06:09 +0000 (0:00:00.060) 0:00:05.079 **** 2025-09-27 21:06:13.987063 | orchestrator | changed: [testbed-manager] 2025-09-27 21:06:13.987068 | orchestrator | 2025-09-27 21:06:13.987072 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-27 21:06:13.987076 | orchestrator | Saturday 27 September 2025 21:06:09 +0000 (0:00:00.462) 0:00:05.541 **** 2025-09-27 21:06:13.987080 | orchestrator | changed: [testbed-manager] 2025-09-27 21:06:13.987084 | orchestrator | 2025-09-27 21:06:13.987088 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-27 21:06:13.987092 | orchestrator | Saturday 27 September 2025 21:06:10 +0000 (0:00:00.957) 0:00:06.499 **** 2025-09-27 21:06:13.987096 | orchestrator | ok: [testbed-manager] 2025-09-27 21:06:13.987100 | orchestrator | 2025-09-27 21:06:13.987104 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-27 21:06:13.987108 | orchestrator | Saturday 27 September 2025 21:06:11 +0000 (0:00:00.837) 0:00:07.336 **** 2025-09-27 21:06:13.987128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-27 21:06:13.987132 | orchestrator | 2025-09-27 21:06:13.987136 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-27 21:06:13.987140 | orchestrator | Saturday 27 September 2025 21:06:11 +0000 (0:00:00.069) 0:00:07.405 **** 2025-09-27 21:06:13.987144 | orchestrator | changed: [testbed-manager] 2025-09-27 21:06:13.987148 | orchestrator | 2025-09-27 21:06:13.987152 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:06:13.987157 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:06:13.987161 | orchestrator | 2025-09-27 21:06:13.987166 | orchestrator | 2025-09-27 21:06:13.987170 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:06:13.987174 | orchestrator | Saturday 27 September 2025 21:06:13 +0000 (0:00:02.025) 0:00:09.430 **** 2025-09-27 21:06:13.987178 | orchestrator | =============================================================================== 2025-09-27 21:06:13.987182 | orchestrator | Gathering Facts --------------------------------------------------------- 3.38s 2025-09-27 21:06:13.987186 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 2.03s 2025-09-27 21:06:13.987190 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.96s 2025-09-27 21:06:13.987194 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.86s 2025-09-27 21:06:13.987198 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.84s 2025-09-27 21:06:13.987202 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.46s 2025-09-27 21:06:13.987218 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.41s 2025-09-27 21:06:13.987222 | orchestrator | 
osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-09-27 21:06:13.987226 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-09-27 21:06:13.987230 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-09-27 21:06:13.987235 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-09-27 21:06:13.987239 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2025-09-27 21:06:13.987243 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-27 21:06:14.246314 | orchestrator | + osism apply sshconfig 2025-09-27 21:06:26.308516 | orchestrator | 2025-09-27 21:06:26 | INFO  | Task cc827b0a-cf5f-4406-8e48-5e9e776be1b3 (sshconfig) was prepared for execution. 2025-09-27 21:06:26.308631 | orchestrator | 2025-09-27 21:06:26 | INFO  | It takes a moment until task cc827b0a-cf5f-4406-8e48-5e9e776be1b3 (sshconfig) has been started and output is visible here. 2025-09-27 21:06:36.703669 | orchestrator | 2025-09-27 21:06:36.703794 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-27 21:06:36.703811 | orchestrator | 2025-09-27 21:06:36.703823 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-27 21:06:36.703835 | orchestrator | Saturday 27 September 2025 21:06:29 +0000 (0:00:00.118) 0:00:00.118 **** 2025-09-27 21:06:36.703846 | orchestrator | ok: [testbed-manager] 2025-09-27 21:06:36.703858 | orchestrator | 2025-09-27 21:06:36.703869 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-27 21:06:36.703880 | orchestrator | Saturday 27 September 2025 21:06:30 +0000 (0:00:00.469) 0:00:00.588 **** 2025-09-27 21:06:36.703890 | orchestrator | changed: [testbed-manager] 2025-09-27 21:06:36.703901 | orchestrator | 2025-09-27 21:06:36.703912 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-27 21:06:36.703923 | orchestrator | Saturday 27 September 2025 21:06:30 +0000 (0:00:00.409) 0:00:00.998 **** 2025-09-27 21:06:36.703964 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-27 21:06:36.703976 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-27 21:06:36.703986 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-27 21:06:36.703997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-27 21:06:36.704008 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-27 21:06:36.704018 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-27 21:06:36.704028 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-27 21:06:36.704039 | orchestrator | 2025-09-27 21:06:36.704050 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-27 21:06:36.704060 | orchestrator | Saturday 27 September 2025 21:06:35 +0000 (0:00:05.141) 0:00:06.139 **** 2025-09-27 21:06:36.704071 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:06:36.704081 | orchestrator | 2025-09-27 21:06:36.704092 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-27 21:06:36.704102 | orchestrator | 
Saturday 27 September 2025 21:06:35 +0000 (0:00:00.060) 0:00:06.200 **** 2025-09-27 21:06:36.704113 | orchestrator | changed: [testbed-manager] 2025-09-27 21:06:36.704123 | orchestrator | 2025-09-27 21:06:36.704134 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:06:36.704146 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:06:36.704157 | orchestrator | 2025-09-27 21:06:36.704168 | orchestrator | 2025-09-27 21:06:36.704180 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:06:36.704191 | orchestrator | Saturday 27 September 2025 21:06:36 +0000 (0:00:00.564) 0:00:06.765 **** 2025-09-27 21:06:36.704203 | orchestrator | =============================================================================== 2025-09-27 21:06:36.704215 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.14s 2025-09-27 21:06:36.704227 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2025-09-27 21:06:36.704240 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2025-09-27 21:06:36.704252 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.41s 2025-09-27 21:06:36.704264 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-09-27 21:06:36.953607 | orchestrator | + osism apply known-hosts 2025-09-27 21:06:48.901140 | orchestrator | 2025-09-27 21:06:48 | INFO  | Task 9b67b753-fc4f-4c53-b8cb-c6143ad94dfa (known-hosts) was prepared for execution. 2025-09-27 21:06:48.901209 | orchestrator | 2025-09-27 21:06:48 | INFO  | It takes a moment until task 9b67b753-fc4f-4c53-b8cb-c6143ad94dfa (known-hosts) has been started and output is visible here. 
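The queued known-hosts task populates the operator's known_hosts from live host key scans; the play output below shows ssh-keyscan running once per hostname and once per ansible_host address. A minimal bash sketch of the same idea, assuming a hypothetical host list and the default ~/.ssh/known_hosts path:

#!/usr/bin/env bash
# Hypothetical host list; the role derives it from the Ansible inventory.
hosts=(testbed-manager testbed-node-0 192.168.16.5 192.168.16.10)

known_hosts="$HOME/.ssh/known_hosts"
for host in "${hosts[@]}"; do
  # Collect RSA, ECDSA and Ed25519 host keys, the same key types written by the play below.
  ssh-keyscan -t rsa,ecdsa,ed25519 "$host" >>"$known_hosts" 2>/dev/null
done

sort -u "$known_hosts" -o "$known_hosts"   # drop duplicates from repeated scans
chmod 0644 "$known_hosts"                  # mode assumed; the role's "Set file permissions" task manages this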
2025-09-27 21:07:05.008887 | orchestrator | 2025-09-27 21:07:05.008941 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-27 21:07:05.008947 | orchestrator | 2025-09-27 21:07:05.008952 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-27 21:07:05.008957 | orchestrator | Saturday 27 September 2025 21:06:52 +0000 (0:00:00.167) 0:00:00.167 **** 2025-09-27 21:07:05.008962 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-27 21:07:05.008967 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-27 21:07:05.008971 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-27 21:07:05.008975 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-27 21:07:05.008979 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-27 21:07:05.008983 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-27 21:07:05.008987 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-27 21:07:05.008991 | orchestrator | 2025-09-27 21:07:05.008995 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-27 21:07:05.009010 | orchestrator | Saturday 27 September 2025 21:06:58 +0000 (0:00:05.783) 0:00:05.951 **** 2025-09-27 21:07:05.009020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-27 21:07:05.009026 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-27 21:07:05.009030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-27 21:07:05.009033 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-27 21:07:05.009037 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-27 21:07:05.009042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-27 21:07:05.009045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-27 21:07:05.009049 | orchestrator | 2025-09-27 21:07:05.009053 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:05.009057 | orchestrator | Saturday 27 September 2025 21:06:58 +0000 (0:00:00.164) 0:00:06.116 **** 2025-09-27 21:07:05.009061 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDytRo2Hj11uH+NSaVEDZODM/T8Gte8N/SsmWaNrECnf) 2025-09-27 21:07:05.009068 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDlIkMpUqDbIu2vcUn0VezHblCW3+XFeW4VfbbKHDi0uWb7IUxG1xiA5a36SqPygaoqcIALdVatygUeL6K6M1EjKESnkn/2+DcZKLL9jZMn+495oOiTSKao9+9eoPjorPuoNUYIREdfW2jTIbsUyHp59lfPJXk1JEN2BcEubLtoE7rrJ3SVl5e1v+TCmUH/MWdmVoAlW6gwXw2OBp/HB8hgumi4P7mVlx4Wh+G4pw3MaxP98078lH3MxGhAclmHeeOivGZoMclOVEEdQo/1RfjCYFvks6M4n/u1bSLE5e7VacW5awXUt29QenSn/2hrPUaEvsRmcFXL24cX6Bzxt7Tkp6Wg87GW4o3X3BhQIlZCruc67O25d4L2hOSi4PEhWNQ2sCZOG6mLIsBzCSZFykMDAmD5iP/hkWrozCSuK21htlxrYYRJsWmxGURjTaNC6knSrlj4qGEkJF/OM7mCiAS1cZjl9ii/+p/HUxq8SJ0FhIMBDKN8f1sAwRYj29OLQTc=) 2025-09-27 21:07:05.009074 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGIoOr+krV0JQ3IB1qduNDh+nawaP+HdOo5arIcyXntnnANiNpIbyVQVlYjuArqrfHt/R5DYQHLlz2cpBlG96p8=) 2025-09-27 21:07:05.009080 | orchestrator | 2025-09-27 21:07:05.009083 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:05.009087 | orchestrator | Saturday 27 September 2025 21:06:59 +0000 (0:00:01.123) 0:00:07.240 **** 2025-09-27 21:07:05.009099 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcYvCtFPQriXZfV0XyV28u7iVFMFZOZjHGMvf60n6gtdtIP9UgvFHcxhdscont2n8wEKcg/8jQJ9+Sph0G3U2FL4Mrlm7rKhACKMuh9nlfFm94cGItEFmbbrPNHZgeM5WXW+C3k9Lw1//5CzLQJYvf1eqO6clkK4JUpA009+0/CsLb+R4CagIzr8bin3AnLEtjBoTvNYPDPFfqYyZOUzn4dAYO8lnacVYhzHpiQ8XbXuAB1zI9upjcsuS7y1hMecl/4E2cKH0OpRjPixQ6iIXUO7fBUGsSaMOryhXrb9r7GbE4Hv8XXXt9daCWOOlOfHEijQ4HHRYjymf73sKvBXwBV3CG6Y5/fGrth0wYtnUGatePHuBErbE1kMCyfcrkC0PFC0mF3mKNDqfejafYqF6p8/4ZYWzgjK/BE+U5/tx4+hBP9H1VM7NXiPCzE37ZtWUvu3eOumFkiI/EMpiTZX0H2Y9bxqUIKqvfu76RRkLyJRTlKohdVlAq+SePO6aNsNU=) 2025-09-27 21:07:05.009104 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNyg0lSp9P+cV1jwwbX6iD8gCZO74DzeYyAYfflp7lIvst9P9c52Uv0I/VGd3d1oadp108QWlnUVKObTiNvMhck=) 2025-09-27 21:07:05.009112 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ04EAZ0ihx4jCxglFDfVF2Yxk+QaCtAUmcksM1dzWTY) 2025-09-27 21:07:05.009116 | orchestrator | 2025-09-27 21:07:05.009120 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:05.009124 | orchestrator | Saturday 27 September 2025 21:07:00 +0000 (0:00:01.027) 0:00:08.267 **** 2025-09-27 21:07:05.009127 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVpz+m0GXhjPkT8Pk2DOWKag9Vf3ylx2dSP4WLgQ0qLGhnVJFku2I9SsA/n6EZhoh1ZSEblzzSnXwkYSkYKKfD0VdfVWE21YMFuv7H3zunRkwrOyFG+e7DNReRmz6QahVUFk+hIdadpjIi7q7F3PwV9KFisxxrxMRV+dO3wJPe1OV7BK7oRFIEbKJYR8r19LAPrpW/bwkT4XjjyBloiHV02HoCQZrJ3k4NRKrRpVmFRUE7t+qKL52aEgm2hswrHLxXWHberhtZIxbVoRGx7YjzAAFNhd6yZBw1T8P3EsqYKZyCCNUqdYUoFUMrXi3Vq5vtY/BCEI1JB7DfJLtx+8yaLEzaf5m54p7sIrDtTas3TdIicJAHLmP8Z4H6BHPhqyt8u66eGqjdK4p5uKYJyBTEX4Pb6t4ftGjqNGFl3JzDgchy21piYZOp+80IlVk8KkmXD8Qmd1ZW92+HnUDuKe/e0HROqkAe5f7WeZW7IkrBfwTe4XM8J4ewI1NMKxGunc=) 2025-09-27 21:07:05.009131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDtl3NQ7S87nzhJ7QA1jYAXxQuW+qLsn/LiR3QzxZJv02Lcezb4Di87PO1OT2C/0bvWnY6pklPlKYdBM8XW7c5U=) 2025-09-27 21:07:05.009169 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrefXWSsBOBA257ARyWt/u29gB5jIqBa9iXE60WI1kz) 2025-09-27 
21:07:05.009173 | orchestrator | 2025-09-27 21:07:05.009176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:05.009180 | orchestrator | Saturday 27 September 2025 21:07:01 +0000 (0:00:01.012) 0:00:09.280 **** 2025-09-27 21:07:05.009186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPDY7HnAu/Z0LytbkSiAMAE0pbmJ0l5qs8UEqIZqc2sceiubvNPTkna8HCrzOd+vZBe4JFnrYn05Grg9vOqWcu4=) 2025-09-27 21:07:05.009190 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtWRm/z4DV4TR/WszkNEYHV/Ll4f/G8qIbkD0fi64oRB76MOvfRcs8C+kJW0kUlV/lo75ndCENomS+KKRLCebwRRcODlXg3RjrQUQl6oM/FyO/jplkEQWCthEcnMxI3U0s7VYkuPFTzZtNfpgSutxskFcGsnZhzjvIy92aycFW4eOHabpUChdkADoMclOdj3x8KnIZcHld0PIflDnoZuW+seEWGYk3WGmPuIfrGnQNck8Z+qGLTVjOOCKp9fxLVIlOkxsDscoLftusHyO0PQ5KyxcK6WOHUTlCixzlwsfCTq1O9Oo1MUj5TonmEf0trgtX2yOVOJ56CF66ac8aS/eYxUTqsSKtNXPdUgiDr4hzt2WTd/JIMSyQVr4bKdGcwY5B3HeA9cl8wvvzheA5yFF9YjZJvlcHfqVdV6rXlU0PUHL1fEg1LvXSrjdsdkYRPTOdtFoddPgec/4vlg7B886gM8OjXemb3k8vSCxJgNtpYmFQ9k15Kembhh3paNFbL80=) 2025-09-27 21:07:05.009194 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQbISF4zIHqtJ2el92VN+90sb7EDmXL1xsOpQgwHv1W) 2025-09-27 21:07:05.009198 | orchestrator | 2025-09-27 21:07:05.009202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:05.009205 | orchestrator | Saturday 27 September 2025 21:07:02 +0000 (0:00:01.027) 0:00:10.307 **** 2025-09-27 21:07:05.009209 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHWiNx+OsMVRgIsYBFTjjiVVF9+9OSgZvqAuu0ZNtvcE8YF5zUeQkG0Xvg6Ret3AfMJQmbwnqQddCuFtxXDdu9tppN7hxe09dAUg0PUDlIwkctLdqhSun4a/QXDn1rN0L9J58i1YT/knJNPUgQanpwrAqYGoVDLuJG4+eSJEXIyQ7S/nipf7ICIDYslPSGyI3wboUJUrL0f8Zioo0jQNqeRHU0GXDX0kShLyEndaHTQa301sg2Lni2RtX/DU+APtO5egSPO8ev01/U+FCEbMbRWtFnVL0YdHBESosetyEzaI74jfcYx3HEZQuaUfbdp0L79v2E6AP6ENaZF2ZZKmpwuwK2KMP5w25tPAF6f0eVFR5bNaSWspHhTjkKfYsd+G8jV1XDXOiBwF7vnirMD+XfRYM1wdrGr5LpV/JfMmx9wOMDMF658uBiJAc3jmLsfWfzWAiRcZX6dmTTAEX/RnZsOy1C/fAiN3qeLJy24U/2f6f6ppNr7QAv0j0VfP0CKNU=) 2025-09-27 21:07:05.009213 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJLtwjP4ybbdik1lHEACmyeobb4Zx0x4pZBaKH7QqagDtwXEI6rpvQ5rhDbzpzxVBbizZNa+WB6YfPp4us5tZcA=) 2025-09-27 21:07:05.009221 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGALbpyK/9I06hoDdtI186tU0ztdtfXkMdKxHKtczGQH) 2025-09-27 21:07:05.009225 | orchestrator | 2025-09-27 21:07:05.009228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:05.009232 | orchestrator | Saturday 27 September 2025 21:07:03 +0000 (0:00:01.039) 0:00:11.347 **** 2025-09-27 21:07:05.009239 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDaOjbc8WgS6rR2uPMy1u4DPhlrzSDNpInQPh8J6S1SUIk3Z5s8enRLAbHIB1fbsrIb7gc0PdogXYJphfSSyrNPqetXY5nZfFLiJBIeCGJWqwYYh4H6DkfnnIZrhcWbgNBbdq8fe0s015XzXLrvxVPxysAoI6xrDUEc8yUqsVd43zuVlZ3GwtuC8NecFJZTMyWKozy+3xcLXucQc5gIkptSnH8ntM4oHhhjmVFszd/aBR0cKepZN7gvmi8ZSBr3lK2egMopFY1QSFk531BrRzaNw1VJ8iaqmVtT451QewZlNwe1ePtlJuLVNd9LN9usuRYSzBPM6yL2pnkTjg/xZoG8cFDAjJJNGtP4LkuIlDJU+xg3AxUk2r0H6OHinlPo/psCmfFXU6TCYmRzheXf+L07Y3DS46ajp3B9tKS/2nMFbqdfJO9v0CjzsV8D2YVjTykklEdbFIBPTUI8JwJ123aA0gMPBXDiBrKICV92CczRYueslg6XbOm9+j0qWwfihJ8=) 2025-09-27 21:07:15.467217 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKECkTytDvOOqstBz0D6ItMR5Al+UHvpF5wl+bmYIKG46Jph6cr3+AZ22EpaNOqqGM47JcUt8xyfzMDFhl1K5k=) 2025-09-27 21:07:15.467332 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJqQBD6Ue9c7Cvki+1BEsP/PhWSSkQDj+zp6zVxRanvM) 2025-09-27 21:07:15.467347 | orchestrator | 2025-09-27 21:07:15.467358 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:15.467370 | orchestrator | Saturday 27 September 2025 21:07:04 +0000 (0:00:01.020) 0:00:12.368 **** 2025-09-27 21:07:15.467381 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC57kM8lWP5vAPCmKvLzl+hkYa+JhoIPrZhDvHsZeFK817lq+vzciwulj1UjDwI0ipGDwRR20Vn4aAYUvVGfk44Vc0tVpJpruF801d4x8tiWivwHBAdHfXgw3FIMQ2SXYeAP0Trzp8euTeS7eV2xWuDKrKfXhHX8oowHNorjUpbRg0tcnc1jxY4kuQA5cqwic9/z0xa8t/6/aWbY+2nPMmdovR++QovobwHmlYj50vElTuMM2zvsxpLJyR+NZy+Dy1P0Drfa6P3qlQnq+pc7xfNmdI7RaJwrZNzkpCggLsIReLXn2QNDCJfyK/mUAEYaJXXukotAMOfcL8zbtRv4jHJl/e105ZvieuTH/liAUYYSH30bUcrohY/c1b2AmSYU5zTIyN4dEczhu9qyFKL+AE0KmcsBc7/7kMq4aZTRsXQetoflAWkHYkVQrCmREZfGfcuPK/HnXGdmspuucvRu10yOyIv+4HPUchkyzjzKtDTLpr8uxpnLJ+2Nlacc+Ut0L0=) 2025-09-27 21:07:15.467394 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICJ0t/XCLIt5f2nfXTXzUETWTGZbic8gdLacGwgPX6Hy) 2025-09-27 21:07:15.467404 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKgWiZIikmvZdw4cgHQnzUhSpoj5V0bqdxH4jUunf019INzLqGl9/MMfNC5D+bmDsjya5zV8lxyuQ7xhi/tEFjY=) 2025-09-27 21:07:15.467414 | orchestrator | 2025-09-27 21:07:15.467424 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-27 21:07:15.467435 | orchestrator | Saturday 27 September 2025 21:07:06 +0000 (0:00:01.031) 0:00:13.399 **** 2025-09-27 21:07:15.467446 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-27 21:07:15.467504 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-27 21:07:15.467514 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-27 21:07:15.467524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-27 21:07:15.467534 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-27 21:07:15.467563 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-27 21:07:15.467573 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-27 21:07:15.467582 | orchestrator | 2025-09-27 21:07:15.467592 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-27 21:07:15.467603 | orchestrator | Saturday 27 September 2025 21:07:11 
+0000 (0:00:05.164) 0:00:18.563 **** 2025-09-27 21:07:15.467614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-27 21:07:15.467650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-27 21:07:15.467660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-27 21:07:15.467670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-27 21:07:15.467679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-27 21:07:15.467689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-27 21:07:15.467699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-27 21:07:15.467708 | orchestrator | 2025-09-27 21:07:15.467718 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:15.467727 | orchestrator | Saturday 27 September 2025 21:07:11 +0000 (0:00:00.160) 0:00:18.724 **** 2025-09-27 21:07:15.467737 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDytRo2Hj11uH+NSaVEDZODM/T8Gte8N/SsmWaNrECnf) 2025-09-27 21:07:15.467770 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlIkMpUqDbIu2vcUn0VezHblCW3+XFeW4VfbbKHDi0uWb7IUxG1xiA5a36SqPygaoqcIALdVatygUeL6K6M1EjKESnkn/2+DcZKLL9jZMn+495oOiTSKao9+9eoPjorPuoNUYIREdfW2jTIbsUyHp59lfPJXk1JEN2BcEubLtoE7rrJ3SVl5e1v+TCmUH/MWdmVoAlW6gwXw2OBp/HB8hgumi4P7mVlx4Wh+G4pw3MaxP98078lH3MxGhAclmHeeOivGZoMclOVEEdQo/1RfjCYFvks6M4n/u1bSLE5e7VacW5awXUt29QenSn/2hrPUaEvsRmcFXL24cX6Bzxt7Tkp6Wg87GW4o3X3BhQIlZCruc67O25d4L2hOSi4PEhWNQ2sCZOG6mLIsBzCSZFykMDAmD5iP/hkWrozCSuK21htlxrYYRJsWmxGURjTaNC6knSrlj4qGEkJF/OM7mCiAS1cZjl9ii/+p/HUxq8SJ0FhIMBDKN8f1sAwRYj29OLQTc=) 2025-09-27 21:07:15.467784 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGIoOr+krV0JQ3IB1qduNDh+nawaP+HdOo5arIcyXntnnANiNpIbyVQVlYjuArqrfHt/R5DYQHLlz2cpBlG96p8=) 2025-09-27 21:07:15.467795 | orchestrator | 2025-09-27 21:07:15.467806 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:15.467817 | orchestrator | Saturday 27 September 2025 21:07:12 +0000 (0:00:01.053) 0:00:19.778 **** 2025-09-27 21:07:15.467828 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDcYvCtFPQriXZfV0XyV28u7iVFMFZOZjHGMvf60n6gtdtIP9UgvFHcxhdscont2n8wEKcg/8jQJ9+Sph0G3U2FL4Mrlm7rKhACKMuh9nlfFm94cGItEFmbbrPNHZgeM5WXW+C3k9Lw1//5CzLQJYvf1eqO6clkK4JUpA009+0/CsLb+R4CagIzr8bin3AnLEtjBoTvNYPDPFfqYyZOUzn4dAYO8lnacVYhzHpiQ8XbXuAB1zI9upjcsuS7y1hMecl/4E2cKH0OpRjPixQ6iIXUO7fBUGsSaMOryhXrb9r7GbE4Hv8XXXt9daCWOOlOfHEijQ4HHRYjymf73sKvBXwBV3CG6Y5/fGrth0wYtnUGatePHuBErbE1kMCyfcrkC0PFC0mF3mKNDqfejafYqF6p8/4ZYWzgjK/BE+U5/tx4+hBP9H1VM7NXiPCzE37ZtWUvu3eOumFkiI/EMpiTZX0H2Y9bxqUIKqvfu76RRkLyJRTlKohdVlAq+SePO6aNsNU=) 2025-09-27 21:07:15.467840 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNyg0lSp9P+cV1jwwbX6iD8gCZO74DzeYyAYfflp7lIvst9P9c52Uv0I/VGd3d1oadp108QWlnUVKObTiNvMhck=) 2025-09-27 21:07:15.467852 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ04EAZ0ihx4jCxglFDfVF2Yxk+QaCtAUmcksM1dzWTY) 2025-09-27 21:07:15.467863 | orchestrator | 2025-09-27 21:07:15.467880 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:15.467891 | orchestrator | Saturday 27 September 2025 21:07:13 +0000 (0:00:01.020) 0:00:20.798 **** 2025-09-27 21:07:15.467902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDtl3NQ7S87nzhJ7QA1jYAXxQuW+qLsn/LiR3QzxZJv02Lcezb4Di87PO1OT2C/0bvWnY6pklPlKYdBM8XW7c5U=) 2025-09-27 21:07:15.467914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRVpz+m0GXhjPkT8Pk2DOWKag9Vf3ylx2dSP4WLgQ0qLGhnVJFku2I9SsA/n6EZhoh1ZSEblzzSnXwkYSkYKKfD0VdfVWE21YMFuv7H3zunRkwrOyFG+e7DNReRmz6QahVUFk+hIdadpjIi7q7F3PwV9KFisxxrxMRV+dO3wJPe1OV7BK7oRFIEbKJYR8r19LAPrpW/bwkT4XjjyBloiHV02HoCQZrJ3k4NRKrRpVmFRUE7t+qKL52aEgm2hswrHLxXWHberhtZIxbVoRGx7YjzAAFNhd6yZBw1T8P3EsqYKZyCCNUqdYUoFUMrXi3Vq5vtY/BCEI1JB7DfJLtx+8yaLEzaf5m54p7sIrDtTas3TdIicJAHLmP8Z4H6BHPhqyt8u66eGqjdK4p5uKYJyBTEX4Pb6t4ftGjqNGFl3JzDgchy21piYZOp+80IlVk8KkmXD8Qmd1ZW92+HnUDuKe/e0HROqkAe5f7WeZW7IkrBfwTe4XM8J4ewI1NMKxGunc=) 2025-09-27 21:07:15.467925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrefXWSsBOBA257ARyWt/u29gB5jIqBa9iXE60WI1kz) 2025-09-27 21:07:15.467935 | orchestrator | 2025-09-27 21:07:15.467946 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:15.467957 | orchestrator | Saturday 27 September 2025 21:07:14 +0000 (0:00:01.029) 0:00:21.828 **** 2025-09-27 21:07:15.467968 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPDY7HnAu/Z0LytbkSiAMAE0pbmJ0l5qs8UEqIZqc2sceiubvNPTkna8HCrzOd+vZBe4JFnrYn05Grg9vOqWcu4=) 2025-09-27 21:07:15.467985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDtWRm/z4DV4TR/WszkNEYHV/Ll4f/G8qIbkD0fi64oRB76MOvfRcs8C+kJW0kUlV/lo75ndCENomS+KKRLCebwRRcODlXg3RjrQUQl6oM/FyO/jplkEQWCthEcnMxI3U0s7VYkuPFTzZtNfpgSutxskFcGsnZhzjvIy92aycFW4eOHabpUChdkADoMclOdj3x8KnIZcHld0PIflDnoZuW+seEWGYk3WGmPuIfrGnQNck8Z+qGLTVjOOCKp9fxLVIlOkxsDscoLftusHyO0PQ5KyxcK6WOHUTlCixzlwsfCTq1O9Oo1MUj5TonmEf0trgtX2yOVOJ56CF66ac8aS/eYxUTqsSKtNXPdUgiDr4hzt2WTd/JIMSyQVr4bKdGcwY5B3HeA9cl8wvvzheA5yFF9YjZJvlcHfqVdV6rXlU0PUHL1fEg1LvXSrjdsdkYRPTOdtFoddPgec/4vlg7B886gM8OjXemb3k8vSCxJgNtpYmFQ9k15Kembhh3paNFbL80=) 2025-09-27 
21:07:15.468009 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQbISF4zIHqtJ2el92VN+90sb7EDmXL1xsOpQgwHv1W) 2025-09-27 21:07:19.622886 | orchestrator | 2025-09-27 21:07:19.623003 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:19.623039 | orchestrator | Saturday 27 September 2025 21:07:15 +0000 (0:00:00.997) 0:00:22.826 **** 2025-09-27 21:07:19.623055 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHWiNx+OsMVRgIsYBFTjjiVVF9+9OSgZvqAuu0ZNtvcE8YF5zUeQkG0Xvg6Ret3AfMJQmbwnqQddCuFtxXDdu9tppN7hxe09dAUg0PUDlIwkctLdqhSun4a/QXDn1rN0L9J58i1YT/knJNPUgQanpwrAqYGoVDLuJG4+eSJEXIyQ7S/nipf7ICIDYslPSGyI3wboUJUrL0f8Zioo0jQNqeRHU0GXDX0kShLyEndaHTQa301sg2Lni2RtX/DU+APtO5egSPO8ev01/U+FCEbMbRWtFnVL0YdHBESosetyEzaI74jfcYx3HEZQuaUfbdp0L79v2E6AP6ENaZF2ZZKmpwuwK2KMP5w25tPAF6f0eVFR5bNaSWspHhTjkKfYsd+G8jV1XDXOiBwF7vnirMD+XfRYM1wdrGr5LpV/JfMmx9wOMDMF658uBiJAc3jmLsfWfzWAiRcZX6dmTTAEX/RnZsOy1C/fAiN3qeLJy24U/2f6f6ppNr7QAv0j0VfP0CKNU=) 2025-09-27 21:07:19.623072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJLtwjP4ybbdik1lHEACmyeobb4Zx0x4pZBaKH7QqagDtwXEI6rpvQ5rhDbzpzxVBbizZNa+WB6YfPp4us5tZcA=) 2025-09-27 21:07:19.623085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGALbpyK/9I06hoDdtI186tU0ztdtfXkMdKxHKtczGQH) 2025-09-27 21:07:19.623097 | orchestrator | 2025-09-27 21:07:19.623108 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:19.623136 | orchestrator | Saturday 27 September 2025 21:07:16 +0000 (0:00:01.014) 0:00:23.840 **** 2025-09-27 21:07:19.623172 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDaOjbc8WgS6rR2uPMy1u4DPhlrzSDNpInQPh8J6S1SUIk3Z5s8enRLAbHIB1fbsrIb7gc0PdogXYJphfSSyrNPqetXY5nZfFLiJBIeCGJWqwYYh4H6DkfnnIZrhcWbgNBbdq8fe0s015XzXLrvxVPxysAoI6xrDUEc8yUqsVd43zuVlZ3GwtuC8NecFJZTMyWKozy+3xcLXucQc5gIkptSnH8ntM4oHhhjmVFszd/aBR0cKepZN7gvmi8ZSBr3lK2egMopFY1QSFk531BrRzaNw1VJ8iaqmVtT451QewZlNwe1ePtlJuLVNd9LN9usuRYSzBPM6yL2pnkTjg/xZoG8cFDAjJJNGtP4LkuIlDJU+xg3AxUk2r0H6OHinlPo/psCmfFXU6TCYmRzheXf+L07Y3DS46ajp3B9tKS/2nMFbqdfJO9v0CjzsV8D2YVjTykklEdbFIBPTUI8JwJ123aA0gMPBXDiBrKICV92CczRYueslg6XbOm9+j0qWwfihJ8=) 2025-09-27 21:07:19.623184 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKECkTytDvOOqstBz0D6ItMR5Al+UHvpF5wl+bmYIKG46Jph6cr3+AZ22EpaNOqqGM47JcUt8xyfzMDFhl1K5k=) 2025-09-27 21:07:19.623195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJqQBD6Ue9c7Cvki+1BEsP/PhWSSkQDj+zp6zVxRanvM) 2025-09-27 21:07:19.623206 | orchestrator | 2025-09-27 21:07:19.623216 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-27 21:07:19.623227 | orchestrator | Saturday 27 September 2025 21:07:17 +0000 (0:00:01.023) 0:00:24.864 **** 2025-09-27 21:07:19.623238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC57kM8lWP5vAPCmKvLzl+hkYa+JhoIPrZhDvHsZeFK817lq+vzciwulj1UjDwI0ipGDwRR20Vn4aAYUvVGfk44Vc0tVpJpruF801d4x8tiWivwHBAdHfXgw3FIMQ2SXYeAP0Trzp8euTeS7eV2xWuDKrKfXhHX8oowHNorjUpbRg0tcnc1jxY4kuQA5cqwic9/z0xa8t/6/aWbY+2nPMmdovR++QovobwHmlYj50vElTuMM2zvsxpLJyR+NZy+Dy1P0Drfa6P3qlQnq+pc7xfNmdI7RaJwrZNzkpCggLsIReLXn2QNDCJfyK/mUAEYaJXXukotAMOfcL8zbtRv4jHJl/e105ZvieuTH/liAUYYSH30bUcrohY/c1b2AmSYU5zTIyN4dEczhu9qyFKL+AE0KmcsBc7/7kMq4aZTRsXQetoflAWkHYkVQrCmREZfGfcuPK/HnXGdmspuucvRu10yOyIv+4HPUchkyzjzKtDTLpr8uxpnLJ+2Nlacc+Ut0L0=) 2025-09-27 21:07:19.623249 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKgWiZIikmvZdw4cgHQnzUhSpoj5V0bqdxH4jUunf019INzLqGl9/MMfNC5D+bmDsjya5zV8lxyuQ7xhi/tEFjY=) 2025-09-27 21:07:19.623260 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICJ0t/XCLIt5f2nfXTXzUETWTGZbic8gdLacGwgPX6Hy) 2025-09-27 21:07:19.623270 | orchestrator | 2025-09-27 21:07:19.623281 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-27 21:07:19.623291 | orchestrator | Saturday 27 September 2025 21:07:18 +0000 (0:00:00.997) 0:00:25.861 **** 2025-09-27 21:07:19.623302 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-27 21:07:19.623313 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-27 21:07:19.623324 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-27 21:07:19.623335 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-27 21:07:19.623345 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-27 21:07:19.623356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-27 21:07:19.623366 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-27 21:07:19.623377 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:07:19.623388 | orchestrator | 2025-09-27 21:07:19.623417 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-27 21:07:19.623430 | orchestrator | Saturday 27 September 2025 21:07:18 +0000 (0:00:00.145) 0:00:26.007 **** 2025-09-27 21:07:19.623443 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:07:19.623507 | orchestrator | 2025-09-27 21:07:19.623521 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-27 21:07:19.623533 | orchestrator | Saturday 27 September 2025 21:07:18 +0000 (0:00:00.069) 0:00:26.077 **** 2025-09-27 21:07:19.623557 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:07:19.623570 | orchestrator | 2025-09-27 21:07:19.623591 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-27 21:07:19.623603 | orchestrator | Saturday 27 September 2025 21:07:18 +0000 (0:00:00.067) 0:00:26.144 **** 2025-09-27 21:07:19.623615 | orchestrator | changed: [testbed-manager] 2025-09-27 21:07:19.623626 | orchestrator | 2025-09-27 21:07:19.623638 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:07:19.623650 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:07:19.623663 | orchestrator | 2025-09-27 21:07:19.623675 | orchestrator | 2025-09-27 21:07:19.623687 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-27 21:07:19.623699 | orchestrator | Saturday 27 September 2025 21:07:19 +0000 (0:00:00.622) 0:00:26.767 **** 2025-09-27 21:07:19.623712 | orchestrator | =============================================================================== 2025-09-27 21:07:19.623722 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.78s 2025-09-27 21:07:19.623733 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2025-09-27 21:07:19.623745 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2025-09-27 21:07:19.623756 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-27 21:07:19.623766 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-27 21:07:19.623777 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-27 21:07:19.623787 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-27 21:07:19.623798 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-27 21:07:19.623808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-27 21:07:19.623819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-27 21:07:19.623830 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-27 21:07:19.623840 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-09-27 21:07:19.623851 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-27 21:07:19.623869 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-09-27 21:07:19.623880 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-27 21:07:19.623891 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-27 21:07:19.623901 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.62s 2025-09-27 21:07:19.623912 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-27 21:07:19.623923 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-27 21:07:19.623934 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-09-27 21:07:19.879802 | orchestrator | + osism apply squid 2025-09-27 21:07:31.817979 | orchestrator | 2025-09-27 21:07:31 | INFO  | Task c919f891-e513-4751-a095-de503425f907 (squid) was prepared for execution. 2025-09-27 21:07:31.818149 | orchestrator | 2025-09-27 21:07:31 | INFO  | It takes a moment until task c919f891-e513-4751-a095-de503425f907 (squid) has been started and output is visible here. 
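The queued squid task deploys the proxy as a Docker Compose project; the play below creates /opt/squid, templates a docker-compose.yml into it and then manages the service. A minimal bash sketch of that manage/inspect flow, assuming /opt/squid as the compose project directory and "squid" as the service name (the service name is not confirmed by this log):

#!/usr/bin/env bash
project=/opt/squid

# Bring the proxy stack up, roughly what the "Manage squid service" task does.
docker compose --project-directory "$project" up -d

# Show the resulting container state, as the deploy script does for the manager stack above.
docker compose --project-directory "$project" ps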
2025-09-27 21:09:23.733112 | orchestrator | 2025-09-27 21:09:23.733260 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-27 21:09:23.733277 | orchestrator | 2025-09-27 21:09:23.733289 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-27 21:09:23.733301 | orchestrator | Saturday 27 September 2025 21:07:35 +0000 (0:00:00.126) 0:00:00.126 **** 2025-09-27 21:09:23.733313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:09:23.733356 | orchestrator | 2025-09-27 21:09:23.733368 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-27 21:09:23.733380 | orchestrator | Saturday 27 September 2025 21:07:35 +0000 (0:00:00.068) 0:00:00.194 **** 2025-09-27 21:09:23.733392 | orchestrator | ok: [testbed-manager] 2025-09-27 21:09:23.733404 | orchestrator | 2025-09-27 21:09:23.733415 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-27 21:09:23.733426 | orchestrator | Saturday 27 September 2025 21:07:36 +0000 (0:00:01.101) 0:00:01.296 **** 2025-09-27 21:09:23.733438 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-27 21:09:23.733449 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-27 21:09:23.733460 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-27 21:09:23.733471 | orchestrator | 2025-09-27 21:09:23.733482 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-27 21:09:23.733541 | orchestrator | Saturday 27 September 2025 21:07:37 +0000 (0:00:00.993) 0:00:02.290 **** 2025-09-27 21:09:23.733554 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-27 21:09:23.733565 | orchestrator | 2025-09-27 21:09:23.733576 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-27 21:09:23.733587 | orchestrator | Saturday 27 September 2025 21:07:38 +0000 (0:00:00.945) 0:00:03.235 **** 2025-09-27 21:09:23.733598 | orchestrator | ok: [testbed-manager] 2025-09-27 21:09:23.733608 | orchestrator | 2025-09-27 21:09:23.733619 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-27 21:09:23.733631 | orchestrator | Saturday 27 September 2025 21:07:38 +0000 (0:00:00.313) 0:00:03.549 **** 2025-09-27 21:09:23.733643 | orchestrator | changed: [testbed-manager] 2025-09-27 21:09:23.733656 | orchestrator | 2025-09-27 21:09:23.733668 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-27 21:09:23.733680 | orchestrator | Saturday 27 September 2025 21:07:39 +0000 (0:00:00.886) 0:00:04.435 **** 2025-09-27 21:09:23.733692 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
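The "Manage squid service" task above simply retries until the container responds. The deploy script traced earlier in this log uses a bounded polling helper (wait_for_container_healthy) for the same purpose; a sketch of that pattern follows, with the loop body and sleep interval assumed since they are not shown in the trace:

#!/usr/bin/env bash
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "$name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # interval assumed; not visible in the trace
    done
}

wait_for_container_healthy 60 squid   # container name "squid" assumed for illustration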
2025-09-27 21:09:23.733706 | orchestrator | ok: [testbed-manager] 2025-09-27 21:09:23.733718 | orchestrator | 2025-09-27 21:09:23.733731 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-27 21:09:23.733742 | orchestrator | Saturday 27 September 2025 21:08:10 +0000 (0:00:30.875) 0:00:35.311 **** 2025-09-27 21:09:23.733754 | orchestrator | changed: [testbed-manager] 2025-09-27 21:09:23.733767 | orchestrator | 2025-09-27 21:09:23.733779 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-27 21:09:23.733792 | orchestrator | Saturday 27 September 2025 21:08:22 +0000 (0:00:12.046) 0:00:47.358 **** 2025-09-27 21:09:23.733804 | orchestrator | Pausing for 60 seconds 2025-09-27 21:09:23.733816 | orchestrator | changed: [testbed-manager] 2025-09-27 21:09:23.733829 | orchestrator | 2025-09-27 21:09:23.733842 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-27 21:09:23.733854 | orchestrator | Saturday 27 September 2025 21:09:22 +0000 (0:01:00.067) 0:01:47.425 **** 2025-09-27 21:09:23.733867 | orchestrator | ok: [testbed-manager] 2025-09-27 21:09:23.733879 | orchestrator | 2025-09-27 21:09:23.733891 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-27 21:09:23.733904 | orchestrator | Saturday 27 September 2025 21:09:22 +0000 (0:00:00.066) 0:01:47.492 **** 2025-09-27 21:09:23.733915 | orchestrator | changed: [testbed-manager] 2025-09-27 21:09:23.733927 | orchestrator | 2025-09-27 21:09:23.733939 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:09:23.733951 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:09:23.733964 | orchestrator | 2025-09-27 21:09:23.733976 | orchestrator | 2025-09-27 21:09:23.733997 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:09:23.734008 | orchestrator | Saturday 27 September 2025 21:09:23 +0000 (0:00:00.591) 0:01:48.083 **** 2025-09-27 21:09:23.734065 | orchestrator | =============================================================================== 2025-09-27 21:09:23.734077 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-27 21:09:23.734088 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.88s 2025-09-27 21:09:23.734099 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.05s 2025-09-27 21:09:23.734110 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.10s 2025-09-27 21:09:23.734121 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.99s 2025-09-27 21:09:23.734131 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.95s 2025-09-27 21:09:23.734142 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s 2025-09-27 21:09:23.734153 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-09-27 21:09:23.734164 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2025-09-27 21:09:23.734174 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 
0.07s 2025-09-27 21:09:23.734185 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-27 21:09:23.967884 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-27 21:09:23.967988 | orchestrator | ++ semver latest 9.0.0 2025-09-27 21:09:24.026595 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-27 21:09:24.026703 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-27 21:09:24.027632 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-27 21:09:35.979002 | orchestrator | 2025-09-27 21:09:35 | INFO  | Task fd9fb64e-a40c-4415-8ce6-63cd791f656c (operator) was prepared for execution. 2025-09-27 21:09:35.979149 | orchestrator | 2025-09-27 21:09:35 | INFO  | It takes a moment until task fd9fb64e-a40c-4415-8ce6-63cd791f656c (operator) has been started and output is visible here. 2025-09-27 21:09:50.768089 | orchestrator | 2025-09-27 21:09:50.768198 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-27 21:09:50.768205 | orchestrator | 2025-09-27 21:09:50.768209 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 21:09:50.768214 | orchestrator | Saturday 27 September 2025 21:09:39 +0000 (0:00:00.109) 0:00:00.109 **** 2025-09-27 21:09:50.768218 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:09:50.768223 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:09:50.768227 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:09:50.768230 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:09:50.768234 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:09:50.768238 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:09:50.768242 | orchestrator | 2025-09-27 21:09:50.768245 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-27 21:09:50.768249 | orchestrator | Saturday 27 September 2025 21:09:42 +0000 (0:00:02.980) 0:00:03.090 **** 2025-09-27 21:09:50.768253 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:09:50.768257 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:09:50.768261 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:09:50.768265 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:09:50.768269 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:09:50.768272 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:09:50.768276 | orchestrator | 2025-09-27 21:09:50.768283 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-27 21:09:50.768287 | orchestrator | 2025-09-27 21:09:50.768291 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-27 21:09:50.768295 | orchestrator | Saturday 27 September 2025 21:09:43 +0000 (0:00:00.718) 0:00:03.808 **** 2025-09-27 21:09:50.768299 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:09:50.768303 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:09:50.768306 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:09:50.768327 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:09:50.768330 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:09:50.768334 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:09:50.768338 | orchestrator | 2025-09-27 21:09:50.768342 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-27 21:09:50.768345 | orchestrator | Saturday 27 September 2025 21:09:43 +0000 (0:00:00.144) 0:00:03.953 **** 2025-09-27 21:09:50.768349 | orchestrator | ok: 
[testbed-node-0] 2025-09-27 21:09:50.768353 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:09:50.768356 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:09:50.768360 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:09:50.768364 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:09:50.768368 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:09:50.768371 | orchestrator | 2025-09-27 21:09:50.768375 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-27 21:09:50.768379 | orchestrator | Saturday 27 September 2025 21:09:43 +0000 (0:00:00.128) 0:00:04.082 **** 2025-09-27 21:09:50.768383 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:09:50.768387 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:09:50.768391 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:09:50.768413 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:09:50.768417 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:09:50.768421 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:09:50.768425 | orchestrator | 2025-09-27 21:09:50.768428 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-27 21:09:50.768432 | orchestrator | Saturday 27 September 2025 21:09:44 +0000 (0:00:00.614) 0:00:04.697 **** 2025-09-27 21:09:50.768436 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:09:50.768439 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:09:50.768443 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:09:50.768447 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:09:50.768450 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:09:50.768454 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:09:50.768458 | orchestrator | 2025-09-27 21:09:50.768461 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-27 21:09:50.768465 | orchestrator | Saturday 27 September 2025 21:09:45 +0000 (0:00:00.769) 0:00:05.466 **** 2025-09-27 21:09:50.768469 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-27 21:09:50.768473 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-27 21:09:50.768477 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-27 21:09:50.768480 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-27 21:09:50.768484 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-27 21:09:50.768488 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-27 21:09:50.768491 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-27 21:09:50.768495 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-27 21:09:50.768535 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-27 21:09:50.768539 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-27 21:09:50.768543 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-27 21:09:50.768546 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-27 21:09:50.768550 | orchestrator | 2025-09-27 21:09:50.768554 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-27 21:09:50.768557 | orchestrator | Saturday 27 September 2025 21:09:46 +0000 (0:00:01.167) 0:00:06.634 **** 2025-09-27 21:09:50.768561 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:09:50.768564 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:09:50.768568 | orchestrator | changed: [testbed-node-4] 
2025-09-27 21:09:50.768572 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:09:50.768575 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:09:50.768579 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:09:50.768583 | orchestrator | 2025-09-27 21:09:50.768587 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-27 21:09:50.768597 | orchestrator | Saturday 27 September 2025 21:09:47 +0000 (0:00:01.210) 0:00:07.844 **** 2025-09-27 21:09:50.768601 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-27 21:09:50.768605 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-09-27 21:09:50.768608 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-27 21:09:50.768612 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:09:50.768627 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:09:50.768631 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:09:50.768635 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:09:50.768638 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:09:50.768642 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-27 21:09:50.768646 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-27 21:09:50.768649 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-27 21:09:50.768653 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-27 21:09:50.768657 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-27 21:09:50.768660 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-27 21:09:50.768664 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-27 21:09:50.768668 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:09:50.768671 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:09:50.768675 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:09:50.768679 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:09:50.768682 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:09:50.768686 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-27 21:09:50.768690 | orchestrator | 2025-09-27 21:09:50.768693 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-27 21:09:50.768698 | orchestrator | Saturday 27 September 2025 21:09:48 +0000 (0:00:01.266) 0:00:09.111 **** 2025-09-27 21:09:50.768702 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:09:50.768705 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:09:50.768709 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:09:50.768713 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:09:50.768716 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:09:50.768720 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:09:50.768724 | orchestrator | 2025-09-27 21:09:50.768727 | orchestrator | TASK [osism.commons.operator : 
Create .ssh directory] ************************** 2025-09-27 21:09:50.768731 | orchestrator | Saturday 27 September 2025 21:09:48 +0000 (0:00:00.151) 0:00:09.263 **** 2025-09-27 21:09:50.768735 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:09:50.768738 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:09:50.768742 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:09:50.768746 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:09:50.768750 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:09:50.768753 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:09:50.768757 | orchestrator | 2025-09-27 21:09:50.768761 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-27 21:09:50.768764 | orchestrator | Saturday 27 September 2025 21:09:49 +0000 (0:00:00.548) 0:00:09.811 **** 2025-09-27 21:09:50.768768 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:09:50.768772 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:09:50.768775 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:09:50.768779 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:09:50.768786 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:09:50.768790 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:09:50.768793 | orchestrator | 2025-09-27 21:09:50.768797 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-27 21:09:50.768801 | orchestrator | Saturday 27 September 2025 21:09:49 +0000 (0:00:00.166) 0:00:09.977 **** 2025-09-27 21:09:50.768804 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:09:50.768808 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:09:50.768812 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:09:50.768815 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:09:50.768819 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-27 21:09:50.768823 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:09:50.768827 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-27 21:09:50.768830 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:09:50.768834 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 21:09:50.768838 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:09:50.768841 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:09:50.768845 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:09:50.768849 | orchestrator | 2025-09-27 21:09:50.768852 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-27 21:09:50.768856 | orchestrator | Saturday 27 September 2025 21:09:50 +0000 (0:00:00.721) 0:00:10.699 **** 2025-09-27 21:09:50.768860 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:09:50.768863 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:09:50.768867 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:09:50.768871 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:09:50.768874 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:09:50.768878 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:09:50.768882 | orchestrator | 2025-09-27 21:09:50.768886 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-27 21:09:50.768889 | orchestrator | Saturday 27 September 2025 21:09:50 +0000 (0:00:00.135) 0:00:10.834 **** 2025-09-27 21:09:50.768893 | orchestrator 
| skipping: [testbed-node-0] 2025-09-27 21:09:50.768897 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:09:50.768900 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:09:50.768904 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:09:50.768908 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:09:50.768911 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:09:50.768915 | orchestrator | 2025-09-27 21:09:50.768919 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-27 21:09:50.768922 | orchestrator | Saturday 27 September 2025 21:09:50 +0000 (0:00:00.174) 0:00:11.008 **** 2025-09-27 21:09:50.768926 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:09:50.768930 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:09:50.768933 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:09:50.768937 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:09:50.768943 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:09:52.105839 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:09:52.105955 | orchestrator | 2025-09-27 21:09:52.105967 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-27 21:09:52.105977 | orchestrator | Saturday 27 September 2025 21:09:50 +0000 (0:00:00.149) 0:00:11.157 **** 2025-09-27 21:09:52.105986 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:09:52.105994 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:09:52.106002 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:09:52.106010 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:09:52.106073 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:09:52.106082 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:09:52.106091 | orchestrator | 2025-09-27 21:09:52.106100 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-27 21:09:52.106108 | orchestrator | Saturday 27 September 2025 21:09:51 +0000 (0:00:00.793) 0:00:11.950 **** 2025-09-27 21:09:52.106145 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:09:52.106154 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:09:52.106161 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:09:52.106169 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:09:52.106177 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:09:52.106185 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:09:52.106192 | orchestrator | 2025-09-27 21:09:52.106200 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:09:52.106210 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:09:52.106219 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:09:52.106227 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:09:52.106235 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:09:52.106243 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:09:52.106269 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:09:52.106277 | orchestrator | 2025-09-27 21:09:52.106285 | 
orchestrator | 2025-09-27 21:09:52.106296 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:09:52.106304 | orchestrator | Saturday 27 September 2025 21:09:51 +0000 (0:00:00.263) 0:00:12.214 **** 2025-09-27 21:09:52.106312 | orchestrator | =============================================================================== 2025-09-27 21:09:52.106320 | orchestrator | Gathering Facts --------------------------------------------------------- 2.98s 2025-09-27 21:09:52.106328 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2025-09-27 21:09:52.106337 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.21s 2025-09-27 21:09:52.106344 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2025-09-27 21:09:52.106352 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.79s 2025-09-27 21:09:52.106360 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2025-09-27 21:09:52.106367 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2025-09-27 21:09:52.106376 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s 2025-09-27 21:09:52.106385 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2025-09-27 21:09:52.106393 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s 2025-09-27 21:09:52.106402 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s 2025-09-27 21:09:52.106411 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s 2025-09-27 21:09:52.106420 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-09-27 21:09:52.106429 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2025-09-27 21:09:52.106438 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-09-27 21:09:52.106447 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s 2025-09-27 21:09:52.106455 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-09-27 21:09:52.106464 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s 2025-09-27 21:09:52.358541 | orchestrator | + osism apply --environment custom facts 2025-09-27 21:09:54.122861 | orchestrator | 2025-09-27 21:09:54 | INFO  | Trying to run play facts in environment custom 2025-09-27 21:10:04.248893 | orchestrator | 2025-09-27 21:10:04 | INFO  | Task 4e0837c6-941b-4074-a29d-9fcd248030d9 (facts) was prepared for execution. 2025-09-27 21:10:04.249011 | orchestrator | 2025-09-27 21:10:04 | INFO  | It takes a moment until task 4e0837c6-941b-4074-a29d-9fcd248030d9 (facts) has been started and output is visible here. 
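The `facts` play whose output follows creates /etc/ansible/facts.d on each host and copies static fact files into it; Ansible exposes files in that directory under `ansible_local` on the next fact-gathering run. A minimal sketch of this general pattern, using a hypothetical fact file name (the actual content of the testbed fact files such as testbed_ceph_devices is not shown in this log):

- name: Distribute a custom local fact (illustrative sketch, not the testbed playbook)
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Copy fact file
      ansible.builtin.copy:
        # hypothetical static fact; the real testbed facts are provided by the deployment
        content: '{"devices": ["sdb", "sdc"]}'
        dest: /etc/ansible/facts.d/example_devices.fact
        mode: "0644"

    - name: Re-read local facts so ansible_local.example_devices becomes available
      ansible.builtin.setup:
        filter: ansible_local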
2025-09-27 21:10:47.459582 | orchestrator | 2025-09-27 21:10:47.459704 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-27 21:10:47.459714 | orchestrator | 2025-09-27 21:10:47.459721 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-27 21:10:47.459728 | orchestrator | Saturday 27 September 2025 21:10:07 +0000 (0:00:00.063) 0:00:00.063 **** 2025-09-27 21:10:47.459735 | orchestrator | ok: [testbed-manager] 2025-09-27 21:10:47.459743 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:10:47.459750 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:10:47.459757 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:10:47.459763 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:10:47.459769 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:10:47.459775 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:10:47.459781 | orchestrator | 2025-09-27 21:10:47.459787 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-27 21:10:47.459793 | orchestrator | Saturday 27 September 2025 21:10:08 +0000 (0:00:01.343) 0:00:01.407 **** 2025-09-27 21:10:47.459799 | orchestrator | ok: [testbed-manager] 2025-09-27 21:10:47.459805 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:10:47.459812 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:10:47.459818 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:10:47.459824 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:10:47.459830 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:10:47.459836 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:10:47.459842 | orchestrator | 2025-09-27 21:10:47.459848 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-27 21:10:47.459854 | orchestrator | 2025-09-27 21:10:47.459860 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-27 21:10:47.459866 | orchestrator | Saturday 27 September 2025 21:10:09 +0000 (0:00:01.098) 0:00:02.505 **** 2025-09-27 21:10:47.459872 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.459878 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.459884 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.459890 | orchestrator | 2025-09-27 21:10:47.459897 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-27 21:10:47.459904 | orchestrator | Saturday 27 September 2025 21:10:10 +0000 (0:00:00.093) 0:00:02.598 **** 2025-09-27 21:10:47.459910 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.459916 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.459922 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.459928 | orchestrator | 2025-09-27 21:10:47.459934 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-27 21:10:47.459940 | orchestrator | Saturday 27 September 2025 21:10:10 +0000 (0:00:00.162) 0:00:02.761 **** 2025-09-27 21:10:47.459946 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.459953 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.459959 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.459965 | orchestrator | 2025-09-27 21:10:47.459972 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-27 21:10:47.459994 | orchestrator | Saturday 
27 September 2025 21:10:10 +0000 (0:00:00.164) 0:00:02.925 **** 2025-09-27 21:10:47.460001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:10:47.460009 | orchestrator | 2025-09-27 21:10:47.460015 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-27 21:10:47.460041 | orchestrator | Saturday 27 September 2025 21:10:10 +0000 (0:00:00.120) 0:00:03.046 **** 2025-09-27 21:10:47.460048 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.460054 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.460060 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.460066 | orchestrator | 2025-09-27 21:10:47.460072 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-27 21:10:47.460078 | orchestrator | Saturday 27 September 2025 21:10:10 +0000 (0:00:00.452) 0:00:03.498 **** 2025-09-27 21:10:47.460084 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:10:47.460090 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:10:47.460097 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:10:47.460104 | orchestrator | 2025-09-27 21:10:47.460111 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-27 21:10:47.460117 | orchestrator | Saturday 27 September 2025 21:10:11 +0000 (0:00:00.091) 0:00:03.590 **** 2025-09-27 21:10:47.460124 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:10:47.460131 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:10:47.460138 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:10:47.460144 | orchestrator | 2025-09-27 21:10:47.460151 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-27 21:10:47.460158 | orchestrator | Saturday 27 September 2025 21:10:12 +0000 (0:00:01.052) 0:00:04.642 **** 2025-09-27 21:10:47.460164 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.460171 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.460177 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.460184 | orchestrator | 2025-09-27 21:10:47.460191 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-27 21:10:47.460197 | orchestrator | Saturday 27 September 2025 21:10:12 +0000 (0:00:00.427) 0:00:05.070 **** 2025-09-27 21:10:47.460204 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:10:47.460211 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:10:47.460217 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:10:47.460224 | orchestrator | 2025-09-27 21:10:47.460231 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-27 21:10:47.460237 | orchestrator | Saturday 27 September 2025 21:10:13 +0000 (0:00:01.016) 0:00:06.086 **** 2025-09-27 21:10:47.460244 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:10:47.460251 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:10:47.460258 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:10:47.460265 | orchestrator | 2025-09-27 21:10:47.460271 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-27 21:10:47.460278 | orchestrator | Saturday 27 September 2025 21:10:31 +0000 (0:00:17.574) 0:00:23.661 **** 2025-09-27 21:10:47.460285 | orchestrator | 
skipping: [testbed-node-3] 2025-09-27 21:10:47.460292 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:10:47.460298 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:10:47.460305 | orchestrator | 2025-09-27 21:10:47.460312 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-27 21:10:47.460332 | orchestrator | Saturday 27 September 2025 21:10:31 +0000 (0:00:00.092) 0:00:23.754 **** 2025-09-27 21:10:47.460340 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:10:47.460347 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:10:47.460354 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:10:47.460360 | orchestrator | 2025-09-27 21:10:47.460367 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-27 21:10:47.460374 | orchestrator | Saturday 27 September 2025 21:10:38 +0000 (0:00:07.505) 0:00:31.259 **** 2025-09-27 21:10:47.460381 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.460387 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.460394 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.460401 | orchestrator | 2025-09-27 21:10:47.460408 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-27 21:10:47.460415 | orchestrator | Saturday 27 September 2025 21:10:39 +0000 (0:00:00.450) 0:00:31.710 **** 2025-09-27 21:10:47.460427 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-27 21:10:47.460434 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-27 21:10:47.460441 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-27 21:10:47.460447 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-27 21:10:47.460453 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-27 21:10:47.460459 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-27 21:10:47.460465 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-27 21:10:47.460471 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-27 21:10:47.460477 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-27 21:10:47.460483 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-27 21:10:47.460489 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-27 21:10:47.460495 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-27 21:10:47.460501 | orchestrator | 2025-09-27 21:10:47.460507 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-27 21:10:47.460526 | orchestrator | Saturday 27 September 2025 21:10:42 +0000 (0:00:03.165) 0:00:34.876 **** 2025-09-27 21:10:47.460533 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.460539 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.460545 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.460551 | orchestrator | 2025-09-27 21:10:47.460557 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:10:47.460563 | orchestrator | 2025-09-27 21:10:47.460569 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:10:47.460576 | orchestrator | 
Saturday 27 September 2025 21:10:43 +0000 (0:00:01.356) 0:00:36.233 **** 2025-09-27 21:10:47.460582 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:10:47.460588 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:10:47.460594 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:10:47.460600 | orchestrator | ok: [testbed-manager] 2025-09-27 21:10:47.460606 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:10:47.460612 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:10:47.460618 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:10:47.460624 | orchestrator | 2025-09-27 21:10:47.460630 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:10:47.460637 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:10:47.460644 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:10:47.460651 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:10:47.460657 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:10:47.460701 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:10:47.460709 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:10:47.460715 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:10:47.460722 | orchestrator | 2025-09-27 21:10:47.460728 | orchestrator | 2025-09-27 21:10:47.460734 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:10:47.460745 | orchestrator | Saturday 27 September 2025 21:10:47 +0000 (0:00:03.710) 0:00:39.943 **** 2025-09-27 21:10:47.460751 | orchestrator | =============================================================================== 2025-09-27 21:10:47.460757 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.57s 2025-09-27 21:10:47.460764 | orchestrator | Install required packages (Debian) -------------------------------------- 7.51s 2025-09-27 21:10:47.460769 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s 2025-09-27 21:10:47.460776 | orchestrator | Copy fact files --------------------------------------------------------- 3.17s 2025-09-27 21:10:47.460782 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.36s 2025-09-27 21:10:47.460787 | orchestrator | Create custom facts directory ------------------------------------------- 1.34s 2025-09-27 21:10:47.460797 | orchestrator | Copy fact file ---------------------------------------------------------- 1.10s 2025-09-27 21:10:47.651034 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s 2025-09-27 21:10:47.651123 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2025-09-27 21:10:47.651133 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s 2025-09-27 21:10:47.651142 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2025-09-27 21:10:47.651150 | orchestrator | osism.commons.repository : Remove sources.list file 
--------------------- 0.43s 2025-09-27 21:10:47.651159 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.16s 2025-09-27 21:10:47.651167 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s 2025-09-27 21:10:47.651176 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s 2025-09-27 21:10:47.651186 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2025-09-27 21:10:47.651194 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-09-27 21:10:47.651203 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s 2025-09-27 21:10:47.896418 | orchestrator | + osism apply bootstrap 2025-09-27 21:10:59.814503 | orchestrator | 2025-09-27 21:10:59 | INFO  | Task c5b707f5-f16b-4956-b3d0-3719190094ef (bootstrap) was prepared for execution. 2025-09-27 21:10:59.814687 | orchestrator | 2025-09-27 21:10:59 | INFO  | It takes a moment until task c5b707f5-f16b-4956-b3d0-3719190094ef (bootstrap) has been started and output is visible here. 2025-09-27 21:11:16.895612 | orchestrator | 2025-09-27 21:11:16.895738 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-27 21:11:16.895755 | orchestrator | 2025-09-27 21:11:16.895767 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-27 21:11:16.895778 | orchestrator | Saturday 27 September 2025 21:11:03 +0000 (0:00:00.121) 0:00:00.121 **** 2025-09-27 21:11:16.895790 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:16.895802 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:16.895813 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:16.895825 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:16.895835 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:16.895846 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:16.895857 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:16.895867 | orchestrator | 2025-09-27 21:11:16.895879 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:11:16.895889 | orchestrator | 2025-09-27 21:11:16.895916 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:11:16.895928 | orchestrator | Saturday 27 September 2025 21:11:03 +0000 (0:00:00.157) 0:00:00.278 **** 2025-09-27 21:11:16.895938 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:16.895949 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:16.895960 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:16.895970 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:16.896004 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:16.896015 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:16.896026 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:16.896036 | orchestrator | 2025-09-27 21:11:16.896047 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-09-27 21:11:16.896058 | orchestrator | 2025-09-27 21:11:16.896068 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:11:16.896079 | orchestrator | Saturday 27 September 2025 21:11:08 +0000 (0:00:04.483) 0:00:04.762 **** 2025-09-27 21:11:16.896090 | orchestrator | skipping: [testbed-manager] => 
(item=testbed-manager)  2025-09-27 21:11:16.896101 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-27 21:11:16.896112 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-27 21:11:16.896122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-27 21:11:16.896133 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-27 21:11:16.896144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:11:16.896154 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-27 21:11:16.896165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:11:16.896175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:11:16.896186 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-27 21:11:16.896196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-27 21:11:16.896208 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-27 21:11:16.896219 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-27 21:11:16.896230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-27 21:11:16.896240 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-27 21:11:16.896251 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:16.896262 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-27 21:11:16.896272 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-27 21:11:16.896283 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-27 21:11:16.896294 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-27 21:11:16.896304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-27 21:11:16.896315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-27 21:11:16.896325 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:11:16.896336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-27 21:11:16.896346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-27 21:11:16.896357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-27 21:11:16.896367 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-27 21:11:16.896378 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-27 21:11:16.896388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-27 21:11:16.896399 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-27 21:11:16.896410 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-27 21:11:16.896420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-27 21:11:16.896430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:11:16.896441 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-27 21:11:16.896451 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-09-27 21:11:16.896462 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-27 21:11:16.896472 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-27 21:11:16.896482 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2025-09-27 21:11:16.896500 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-27 21:11:16.896511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-27 21:11:16.896539 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:11:16.896551 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:11:16.896562 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-27 21:11:16.896572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-27 21:11:16.896583 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-27 21:11:16.896594 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-27 21:11:16.896623 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:11:16.896635 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-27 21:11:16.896645 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:11:16.896656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-27 21:11:16.896667 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-27 21:11:16.896677 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:11:16.896688 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-27 21:11:16.896698 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-27 21:11:16.896708 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-27 21:11:16.896719 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:11:16.896729 | orchestrator | 2025-09-27 21:11:16.896740 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-27 21:11:16.896751 | orchestrator | 2025-09-27 21:11:16.896761 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-27 21:11:16.896772 | orchestrator | Saturday 27 September 2025 21:11:08 +0000 (0:00:00.392) 0:00:05.154 **** 2025-09-27 21:11:16.896783 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:16.896793 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:16.896804 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:16.896815 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:16.896825 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:16.896836 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:16.896846 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:16.896857 | orchestrator | 2025-09-27 21:11:16.896868 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-27 21:11:16.896878 | orchestrator | Saturday 27 September 2025 21:11:10 +0000 (0:00:02.000) 0:00:07.154 **** 2025-09-27 21:11:16.896889 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:16.896899 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:16.896909 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:16.896920 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:16.896930 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:16.896940 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:16.896951 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:16.896961 | orchestrator | 2025-09-27 21:11:16.896972 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-27 21:11:16.896983 | orchestrator | Saturday 27 September 2025 21:11:12 +0000 
(0:00:01.841) 0:00:08.995 **** 2025-09-27 21:11:16.896995 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:11:16.897008 | orchestrator | 2025-09-27 21:11:16.897019 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-27 21:11:16.897030 | orchestrator | Saturday 27 September 2025 21:11:12 +0000 (0:00:00.215) 0:00:09.211 **** 2025-09-27 21:11:16.897040 | orchestrator | changed: [testbed-manager] 2025-09-27 21:11:16.897051 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:11:16.897061 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:11:16.897071 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:11:16.897082 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:16.897099 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:16.897110 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:16.897120 | orchestrator | 2025-09-27 21:11:16.897131 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-27 21:11:16.897142 | orchestrator | Saturday 27 September 2025 21:11:14 +0000 (0:00:01.924) 0:00:11.135 **** 2025-09-27 21:11:16.897152 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:16.897164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:11:16.897176 | orchestrator | 2025-09-27 21:11:16.897187 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-27 21:11:16.897198 | orchestrator | Saturday 27 September 2025 21:11:14 +0000 (0:00:00.236) 0:00:11.372 **** 2025-09-27 21:11:16.897208 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:11:16.897219 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:11:16.897229 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:16.897240 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:16.897250 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:11:16.897260 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:16.897271 | orchestrator | 2025-09-27 21:11:16.897282 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-09-27 21:11:16.897292 | orchestrator | Saturday 27 September 2025 21:11:15 +0000 (0:00:01.051) 0:00:12.424 **** 2025-09-27 21:11:16.897303 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:16.897313 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:11:16.897324 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:16.897334 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:16.897344 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:11:16.897355 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:16.897365 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:11:16.897376 | orchestrator | 2025-09-27 21:11:16.897387 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-27 21:11:16.897397 | orchestrator | Saturday 27 September 2025 21:11:16 +0000 (0:00:00.547) 0:00:12.971 **** 2025-09-27 21:11:16.897408 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
21:11:16.897418 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:11:16.897429 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:11:16.897439 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:11:16.897450 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:11:16.897460 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:11:16.897471 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:16.897481 | orchestrator | 2025-09-27 21:11:16.897500 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-27 21:11:16.897513 | orchestrator | Saturday 27 September 2025 21:11:16 +0000 (0:00:00.403) 0:00:13.374 **** 2025-09-27 21:11:16.897543 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:16.897554 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:11:16.897572 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:11:29.290911 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:11:29.291022 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:11:29.291036 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:11:29.291046 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:11:29.291056 | orchestrator | 2025-09-27 21:11:29.291067 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-27 21:11:29.291078 | orchestrator | Saturday 27 September 2025 21:11:17 +0000 (0:00:00.226) 0:00:13.601 **** 2025-09-27 21:11:29.291090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:11:29.291117 | orchestrator | 2025-09-27 21:11:29.291158 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-27 21:11:29.291170 | orchestrator | Saturday 27 September 2025 21:11:17 +0000 (0:00:00.287) 0:00:13.888 **** 2025-09-27 21:11:29.291180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:11:29.291190 | orchestrator | 2025-09-27 21:11:29.291200 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-27 21:11:29.291210 | orchestrator | Saturday 27 September 2025 21:11:17 +0000 (0:00:00.294) 0:00:14.183 **** 2025-09-27 21:11:29.291220 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.291231 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.291240 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.291250 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.291260 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.291269 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.291278 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.291288 | orchestrator | 2025-09-27 21:11:29.291297 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-27 21:11:29.291307 | orchestrator | Saturday 27 September 2025 21:11:18 +0000 (0:00:01.353) 0:00:15.536 **** 2025-09-27 21:11:29.291317 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:29.291326 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:11:29.291336 | 
orchestrator | skipping: [testbed-node-4] 2025-09-27 21:11:29.291345 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:11:29.291354 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:11:29.291364 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:11:29.291373 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:11:29.291382 | orchestrator | 2025-09-27 21:11:29.291392 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-27 21:11:29.291402 | orchestrator | Saturday 27 September 2025 21:11:19 +0000 (0:00:00.187) 0:00:15.724 **** 2025-09-27 21:11:29.291411 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.291421 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.291430 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.291439 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.291449 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.291458 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.291468 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.291477 | orchestrator | 2025-09-27 21:11:29.291487 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-27 21:11:29.291496 | orchestrator | Saturday 27 September 2025 21:11:19 +0000 (0:00:00.572) 0:00:16.297 **** 2025-09-27 21:11:29.291506 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:29.291515 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:11:29.291555 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:11:29.291566 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:11:29.291576 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:11:29.291586 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:11:29.291595 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:11:29.291605 | orchestrator | 2025-09-27 21:11:29.291615 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-27 21:11:29.291625 | orchestrator | Saturday 27 September 2025 21:11:19 +0000 (0:00:00.228) 0:00:16.525 **** 2025-09-27 21:11:29.291635 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.291644 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:11:29.291654 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:11:29.291663 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:29.291673 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:11:29.291682 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:29.291691 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:29.291701 | orchestrator | 2025-09-27 21:11:29.291711 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-27 21:11:29.291727 | orchestrator | Saturday 27 September 2025 21:11:20 +0000 (0:00:00.644) 0:00:17.170 **** 2025-09-27 21:11:29.291737 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.291747 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:11:29.291756 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:11:29.291765 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:11:29.291775 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:29.291784 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:29.291794 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:29.291803 | orchestrator | 2025-09-27 21:11:29.291813 | orchestrator | TASK [osism.commons.resolvconf : Start/enable 
systemd-resolved service] ******** 2025-09-27 21:11:29.291822 | orchestrator | Saturday 27 September 2025 21:11:21 +0000 (0:00:01.216) 0:00:18.386 **** 2025-09-27 21:11:29.291832 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.291841 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.291851 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.291860 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.291870 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.291879 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.291888 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.291898 | orchestrator | 2025-09-27 21:11:29.291908 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-27 21:11:29.291918 | orchestrator | Saturday 27 September 2025 21:11:23 +0000 (0:00:01.245) 0:00:19.631 **** 2025-09-27 21:11:29.291944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:11:29.291955 | orchestrator | 2025-09-27 21:11:29.291964 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-27 21:11:29.291974 | orchestrator | Saturday 27 September 2025 21:11:23 +0000 (0:00:00.407) 0:00:20.039 **** 2025-09-27 21:11:29.291983 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:29.291993 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:11:29.292002 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:11:29.292012 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:29.292021 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:29.292030 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:11:29.292045 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:29.292055 | orchestrator | 2025-09-27 21:11:29.292064 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-27 21:11:29.292074 | orchestrator | Saturday 27 September 2025 21:11:24 +0000 (0:00:01.291) 0:00:21.331 **** 2025-09-27 21:11:29.292084 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292093 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292103 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292112 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292122 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.292131 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.292140 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.292150 | orchestrator | 2025-09-27 21:11:29.292159 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-27 21:11:29.292169 | orchestrator | Saturday 27 September 2025 21:11:24 +0000 (0:00:00.241) 0:00:21.572 **** 2025-09-27 21:11:29.292178 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292188 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292197 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292206 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292216 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.292225 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.292234 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.292243 | orchestrator | 2025-09-27 21:11:29.292253 | orchestrator | TASK 
[osism.commons.repository : Set repositories to default] ****************** 2025-09-27 21:11:29.292262 | orchestrator | Saturday 27 September 2025 21:11:25 +0000 (0:00:00.231) 0:00:21.804 **** 2025-09-27 21:11:29.292278 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292287 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292297 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292306 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292315 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.292325 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.292334 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.292343 | orchestrator | 2025-09-27 21:11:29.292353 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-27 21:11:29.292363 | orchestrator | Saturday 27 September 2025 21:11:25 +0000 (0:00:00.233) 0:00:22.037 **** 2025-09-27 21:11:29.292373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:11:29.292385 | orchestrator | 2025-09-27 21:11:29.292394 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-27 21:11:29.292404 | orchestrator | Saturday 27 September 2025 21:11:25 +0000 (0:00:00.252) 0:00:22.289 **** 2025-09-27 21:11:29.292413 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292423 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292433 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292442 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292452 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.292461 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.292470 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.292480 | orchestrator | 2025-09-27 21:11:29.292489 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-27 21:11:29.292499 | orchestrator | Saturday 27 September 2025 21:11:26 +0000 (0:00:00.565) 0:00:22.855 **** 2025-09-27 21:11:29.292508 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:11:29.292518 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:11:29.292543 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:11:29.292553 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:11:29.292563 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:11:29.292572 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:11:29.292582 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:11:29.292591 | orchestrator | 2025-09-27 21:11:29.292601 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-27 21:11:29.292610 | orchestrator | Saturday 27 September 2025 21:11:26 +0000 (0:00:00.228) 0:00:23.084 **** 2025-09-27 21:11:29.292620 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292629 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292639 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292648 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292658 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:11:29.292667 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:11:29.292676 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:11:29.292686 | orchestrator | 2025-09-27 
21:11:29.292695 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-27 21:11:29.292705 | orchestrator | Saturday 27 September 2025 21:11:27 +0000 (0:00:01.054) 0:00:24.139 **** 2025-09-27 21:11:29.292714 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292723 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292733 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292742 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292752 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:11:29.292761 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:11:29.292770 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:11:29.292780 | orchestrator | 2025-09-27 21:11:29.292789 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-27 21:11:29.292799 | orchestrator | Saturday 27 September 2025 21:11:28 +0000 (0:00:00.557) 0:00:24.697 **** 2025-09-27 21:11:29.292809 | orchestrator | ok: [testbed-manager] 2025-09-27 21:11:29.292824 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:11:29.292834 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:11:29.292843 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:11:29.292858 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:12:08.787380 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:12:08.787531 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:12:08.787605 | orchestrator | 2025-09-27 21:12:08.787620 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-27 21:12:08.787634 | orchestrator | Saturday 27 September 2025 21:11:29 +0000 (0:00:01.175) 0:00:25.872 **** 2025-09-27 21:12:08.787646 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.787658 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.787670 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.787681 | orchestrator | changed: [testbed-manager] 2025-09-27 21:12:08.787692 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:12:08.787703 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:12:08.787714 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:12:08.787725 | orchestrator | 2025-09-27 21:12:08.787737 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-27 21:12:08.787748 | orchestrator | Saturday 27 September 2025 21:11:47 +0000 (0:00:18.077) 0:00:43.949 **** 2025-09-27 21:12:08.787759 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.787770 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.787781 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.787792 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.787802 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.787813 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.787824 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.787835 | orchestrator | 2025-09-27 21:12:08.787846 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-27 21:12:08.787857 | orchestrator | Saturday 27 September 2025 21:11:47 +0000 (0:00:00.223) 0:00:44.173 **** 2025-09-27 21:12:08.787868 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.787879 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.787891 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.787903 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.787915 | orchestrator | ok: 
[testbed-node-0] 2025-09-27 21:12:08.787927 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.787939 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.787951 | orchestrator | 2025-09-27 21:12:08.787963 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-27 21:12:08.787976 | orchestrator | Saturday 27 September 2025 21:11:47 +0000 (0:00:00.201) 0:00:44.375 **** 2025-09-27 21:12:08.787988 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.788000 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.788013 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.788025 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.788037 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.788050 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.788062 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.788075 | orchestrator | 2025-09-27 21:12:08.788087 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-27 21:12:08.788100 | orchestrator | Saturday 27 September 2025 21:11:47 +0000 (0:00:00.208) 0:00:44.583 **** 2025-09-27 21:12:08.788114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:12:08.788130 | orchestrator | 2025-09-27 21:12:08.788142 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-27 21:12:08.788154 | orchestrator | Saturday 27 September 2025 21:11:48 +0000 (0:00:00.263) 0:00:44.847 **** 2025-09-27 21:12:08.788166 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.788178 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.788190 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.788233 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.788246 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.788258 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.788269 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.788280 | orchestrator | 2025-09-27 21:12:08.788291 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-27 21:12:08.788301 | orchestrator | Saturday 27 September 2025 21:11:49 +0000 (0:00:01.418) 0:00:46.265 **** 2025-09-27 21:12:08.788312 | orchestrator | changed: [testbed-manager] 2025-09-27 21:12:08.788323 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:12:08.788333 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:12:08.788344 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:12:08.788355 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:12:08.788365 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:12:08.788376 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:12:08.788386 | orchestrator | 2025-09-27 21:12:08.788397 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-27 21:12:08.788408 | orchestrator | Saturday 27 September 2025 21:11:50 +0000 (0:00:01.019) 0:00:47.284 **** 2025-09-27 21:12:08.788419 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.788429 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.788440 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.788450 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.788461 | 
orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.788472 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.788504 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.788515 | orchestrator | 2025-09-27 21:12:08.788526 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-27 21:12:08.788537 | orchestrator | Saturday 27 September 2025 21:11:51 +0000 (0:00:00.757) 0:00:48.041 **** 2025-09-27 21:12:08.788583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:12:08.788607 | orchestrator | 2025-09-27 21:12:08.788627 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-27 21:12:08.788639 | orchestrator | Saturday 27 September 2025 21:11:51 +0000 (0:00:00.264) 0:00:48.306 **** 2025-09-27 21:12:08.788650 | orchestrator | changed: [testbed-manager] 2025-09-27 21:12:08.788661 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:12:08.788671 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:12:08.788682 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:12:08.788693 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:12:08.788703 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:12:08.788714 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:12:08.788725 | orchestrator | 2025-09-27 21:12:08.788755 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-09-27 21:12:08.788767 | orchestrator | Saturday 27 September 2025 21:11:52 +0000 (0:00:01.012) 0:00:49.319 **** 2025-09-27 21:12:08.788778 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:12:08.788788 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:12:08.788799 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:12:08.788810 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:12:08.788820 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:12:08.788831 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:12:08.788841 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:12:08.788852 | orchestrator | 2025-09-27 21:12:08.788862 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-27 21:12:08.788880 | orchestrator | Saturday 27 September 2025 21:11:52 +0000 (0:00:00.278) 0:00:49.597 **** 2025-09-27 21:12:08.788891 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:12:08.788901 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:12:08.788912 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:12:08.788922 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:12:08.788942 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:12:08.788953 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:12:08.788963 | orchestrator | changed: [testbed-manager] 2025-09-27 21:12:08.788974 | orchestrator | 2025-09-27 21:12:08.788984 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-27 21:12:08.788995 | orchestrator | Saturday 27 September 2025 21:12:03 +0000 (0:00:10.611) 0:01:00.209 **** 2025-09-27 21:12:08.789006 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.789017 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.789027 | orchestrator | ok: [testbed-node-5] 2025-09-27 
21:12:08.789038 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.789048 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.789059 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.789069 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.789079 | orchestrator | 2025-09-27 21:12:08.789090 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-27 21:12:08.789101 | orchestrator | Saturday 27 September 2025 21:12:04 +0000 (0:00:01.041) 0:01:01.250 **** 2025-09-27 21:12:08.789112 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.789123 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.789133 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.789144 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.789154 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.789165 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.789176 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.789186 | orchestrator | 2025-09-27 21:12:08.789197 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-27 21:12:08.789207 | orchestrator | Saturday 27 September 2025 21:12:05 +0000 (0:00:00.894) 0:01:02.144 **** 2025-09-27 21:12:08.789218 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.789229 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.789239 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.789250 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.789261 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.789271 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.789281 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.789292 | orchestrator | 2025-09-27 21:12:08.789303 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-27 21:12:08.789314 | orchestrator | Saturday 27 September 2025 21:12:05 +0000 (0:00:00.208) 0:01:02.353 **** 2025-09-27 21:12:08.789325 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.789335 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.789346 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.789357 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.789367 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.789378 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.789388 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.789399 | orchestrator | 2025-09-27 21:12:08.789409 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-27 21:12:08.789420 | orchestrator | Saturday 27 September 2025 21:12:05 +0000 (0:00:00.195) 0:01:02.549 **** 2025-09-27 21:12:08.789431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:12:08.789442 | orchestrator | 2025-09-27 21:12:08.789453 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-27 21:12:08.789463 | orchestrator | Saturday 27 September 2025 21:12:06 +0000 (0:00:00.253) 0:01:02.802 **** 2025-09-27 21:12:08.789474 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.789484 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.789495 | orchestrator | ok: [testbed-node-5] 2025-09-27 
21:12:08.789505 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.789516 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.789526 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.789537 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.789601 | orchestrator | 2025-09-27 21:12:08.789622 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-27 21:12:08.789642 | orchestrator | Saturday 27 September 2025 21:12:07 +0000 (0:00:01.760) 0:01:04.562 **** 2025-09-27 21:12:08.789660 | orchestrator | changed: [testbed-manager] 2025-09-27 21:12:08.789676 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:12:08.789696 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:12:08.789712 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:12:08.789728 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:12:08.789744 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:12:08.789761 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:12:08.789779 | orchestrator | 2025-09-27 21:12:08.789794 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-27 21:12:08.789810 | orchestrator | Saturday 27 September 2025 21:12:08 +0000 (0:00:00.584) 0:01:05.147 **** 2025-09-27 21:12:08.789828 | orchestrator | ok: [testbed-manager] 2025-09-27 21:12:08.789845 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:12:08.789861 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:12:08.789876 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:12:08.789891 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:12:08.789907 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:12:08.789922 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:12:08.789937 | orchestrator | 2025-09-27 21:12:08.789963 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-27 21:14:32.979327 | orchestrator | Saturday 27 September 2025 21:12:08 +0000 (0:00:00.226) 0:01:05.373 **** 2025-09-27 21:14:32.979448 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:32.979464 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:32.979476 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:32.979486 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:32.979497 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:32.979507 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:32.979523 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:32.979542 | orchestrator | 2025-09-27 21:14:32.979563 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-27 21:14:32.979584 | orchestrator | Saturday 27 September 2025 21:12:09 +0000 (0:00:01.206) 0:01:06.579 **** 2025-09-27 21:14:32.979640 | orchestrator | changed: [testbed-manager] 2025-09-27 21:14:32.979659 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:14:32.979687 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:14:32.979698 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:14:32.979709 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:14:32.979720 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:14:32.979731 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:14:32.979742 | orchestrator | 2025-09-27 21:14:32.979754 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-27 21:14:32.979765 | orchestrator | Saturday 27 September 2025 21:12:11 +0000 
(0:00:01.696) 0:01:08.276 **** 2025-09-27 21:14:32.979776 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:32.979787 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:32.979797 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:32.979808 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:32.979819 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:32.979830 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:32.979840 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:32.979851 | orchestrator | 2025-09-27 21:14:32.979863 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-27 21:14:32.979875 | orchestrator | Saturday 27 September 2025 21:12:13 +0000 (0:00:02.018) 0:01:10.295 **** 2025-09-27 21:14:32.979887 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:32.979899 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:32.979910 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:32.979922 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:32.979934 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:32.979946 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:32.979981 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:32.979993 | orchestrator | 2025-09-27 21:14:32.980005 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-27 21:14:32.980018 | orchestrator | Saturday 27 September 2025 21:13:02 +0000 (0:00:48.423) 0:01:58.719 **** 2025-09-27 21:14:32.980030 | orchestrator | changed: [testbed-manager] 2025-09-27 21:14:32.980041 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:14:32.980053 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:14:32.980065 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:14:32.980077 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:14:32.980088 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:14:32.980100 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:14:32.980112 | orchestrator | 2025-09-27 21:14:32.980124 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-27 21:14:32.980136 | orchestrator | Saturday 27 September 2025 21:14:19 +0000 (0:01:17.390) 0:03:16.109 **** 2025-09-27 21:14:32.980147 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:32.980159 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:32.980171 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:32.980183 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:32.980195 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:32.980207 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:32.980219 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:32.980229 | orchestrator | 2025-09-27 21:14:32.980240 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-27 21:14:32.980252 | orchestrator | Saturday 27 September 2025 21:14:21 +0000 (0:00:01.876) 0:03:17.985 **** 2025-09-27 21:14:32.980263 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:32.980273 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:32.980284 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:32.980294 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:32.980304 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:32.980315 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:32.980325 | orchestrator | changed: [testbed-manager] 2025-09-27 21:14:32.980336 | orchestrator | 2025-09-27 
21:14:32.980347 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-27 21:14:32.980357 | orchestrator | Saturday 27 September 2025 21:14:31 +0000 (0:00:10.309) 0:03:28.294 **** 2025-09-27 21:14:32.980377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-27 21:14:32.980394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-09-27 21:14:32.980434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-27 21:14:32.980455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-27 21:14:32.980474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-27 21:14:32.980486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-27 21:14:32.980497 | orchestrator | 2025-09-27 21:14:32.980508 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-27 21:14:32.980518 | orchestrator | Saturday 27 September 2025 21:14:32 +0000 (0:00:00.385) 0:03:28.680 **** 2025-09-27 21:14:32.980529 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-27 21:14:32.980540 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:32.980550 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-27 21:14:32.980561 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
21:14:32.980571 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-27 21:14:32.980582 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:14:32.980614 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-27 21:14:32.980625 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:14:32.980636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 21:14:32.980647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 21:14:32.980657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 21:14:32.980668 | orchestrator | 2025-09-27 21:14:32.980679 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-27 21:14:32.980689 | orchestrator | Saturday 27 September 2025 21:14:32 +0000 (0:00:00.787) 0:03:29.467 **** 2025-09-27 21:14:32.980699 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-27 21:14:32.980711 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-27 21:14:32.980722 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-27 21:14:32.980733 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-27 21:14:32.980743 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-27 21:14:32.980754 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-27 21:14:32.980764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-27 21:14:32.980775 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-27 21:14:32.980785 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-27 21:14:32.980796 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-27 21:14:32.980807 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-27 21:14:32.980817 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-27 21:14:32.980835 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-27 21:14:32.980846 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-27 21:14:32.980856 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-27 21:14:32.980867 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-27 21:14:32.980878 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-27 21:14:32.980902 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-27 21:14:43.637068 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 
'value': 0})  2025-09-27 21:14:43.637216 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-27 21:14:43.637234 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-27 21:14:43.637246 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-27 21:14:43.637258 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:43.637289 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-27 21:14:43.637301 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-27 21:14:43.637312 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-27 21:14:43.637323 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-27 21:14:43.637334 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-27 21:14:43.637345 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-27 21:14:43.637362 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-27 21:14:43.637373 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-27 21:14:43.637384 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-27 21:14:43.637395 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:14:43.637406 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-27 21:14:43.637417 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-27 21:14:43.637428 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-27 21:14:43.637439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-27 21:14:43.637450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-27 21:14:43.637460 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-27 21:14:43.637471 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-27 21:14:43.637482 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-27 21:14:43.637492 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-27 21:14:43.637503 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:14:43.637514 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:14:43.637525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-27 21:14:43.637536 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-27 21:14:43.637570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-27 21:14:43.637582 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-27 21:14:43.637625 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-27 21:14:43.637637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-27 21:14:43.637648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-27 21:14:43.637658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-27 21:14:43.637669 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-27 21:14:43.637680 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-27 21:14:43.637691 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-27 21:14:43.637701 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-27 21:14:43.637712 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-27 21:14:43.637723 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-27 21:14:43.637734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-27 21:14:43.637745 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-27 21:14:43.637755 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-27 21:14:43.637767 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-27 21:14:43.637795 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-27 21:14:43.637807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-27 21:14:43.637818 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-27 21:14:43.637829 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-27 21:14:43.637839 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-27 21:14:43.637856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-27 21:14:43.637867 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-27 21:14:43.637878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-27 21:14:43.637889 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-27 21:14:43.637899 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-27 21:14:43.637910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-27 21:14:43.637920 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-27 21:14:43.637931 | orchestrator | 2025-09-27 21:14:43.637943 | orchestrator | TASK [osism.commons.sysctl 
: Set sysctl parameters on generic] ***************** 2025-09-27 21:14:43.637954 | orchestrator | Saturday 27 September 2025 21:14:40 +0000 (0:00:08.039) 0:03:37.506 **** 2025-09-27 21:14:43.637964 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.637975 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.637985 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.638005 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.638093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.638106 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.638117 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-27 21:14:43.638128 | orchestrator | 2025-09-27 21:14:43.638139 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-09-27 21:14:43.638149 | orchestrator | Saturday 27 September 2025 21:14:41 +0000 (0:00:00.578) 0:03:38.085 **** 2025-09-27 21:14:43.638159 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638170 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:43.638181 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638191 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638202 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:14:43.638212 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:14:43.638224 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638234 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:14:43.638245 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-27 21:14:43.638255 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-27 21:14:43.638266 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-27 21:14:43.638276 | orchestrator | 2025-09-27 21:14:43.638287 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-09-27 21:14:43.638298 | orchestrator | Saturday 27 September 2025 21:14:42 +0000 (0:00:01.498) 0:03:39.583 **** 2025-09-27 21:14:43.638308 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638319 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638329 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:43.638340 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:14:43.638350 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638361 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:14:43.638372 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-27 21:14:43.638382 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:14:43.638392 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-27 21:14:43.638403 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-27 21:14:43.638414 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-27 21:14:43.638424 | orchestrator | 2025-09-27 21:14:43.638444 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-27 21:14:56.905101 | orchestrator | Saturday 27 September 2025 21:14:43 +0000 (0:00:00.639) 0:03:40.223 **** 2025-09-27 21:14:56.905239 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-27 21:14:56.905256 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:56.905268 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-27 21:14:56.905279 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:14:56.905319 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-27 21:14:56.905331 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:14:56.905342 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-27 21:14:56.905353 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:14:56.905364 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-27 21:14:56.905375 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-27 21:14:56.905385 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-09-27 21:14:56.905396 | orchestrator | 2025-09-27 21:14:56.905408 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-09-27 21:14:56.905419 | orchestrator | Saturday 27 September 2025 21:14:44 +0000 (0:00:00.560) 0:03:40.784 **** 2025-09-27 21:14:56.905430 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:56.905441 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:14:56.905451 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:14:56.905462 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:14:56.905473 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:14:56.905483 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:14:56.905494 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:14:56.905504 | orchestrator | 2025-09-27 21:14:56.905515 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-09-27 21:14:56.905526 | orchestrator | Saturday 27 September 2025 21:14:44 +0000 (0:00:00.301) 0:03:41.085 **** 2025-09-27 21:14:56.905537 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:56.905549 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:56.905560 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:56.905571 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:56.905581 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:56.905592 | 
orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:56.905628 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:56.905639 | orchestrator | 2025-09-27 21:14:56.905649 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-09-27 21:14:56.905660 | orchestrator | Saturday 27 September 2025 21:14:50 +0000 (0:00:05.562) 0:03:46.648 **** 2025-09-27 21:14:56.905671 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-09-27 21:14:56.905682 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:14:56.905693 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-09-27 21:14:56.905704 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-09-27 21:14:56.905715 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:14:56.905725 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-09-27 21:14:56.905736 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:14:56.905747 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-09-27 21:14:56.905757 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:14:56.905768 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-09-27 21:14:56.905779 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:14:56.905789 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:14:56.905800 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-09-27 21:14:56.905810 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:14:56.905821 | orchestrator | 2025-09-27 21:14:56.905832 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-09-27 21:14:56.905843 | orchestrator | Saturday 27 September 2025 21:14:50 +0000 (0:00:00.324) 0:03:46.972 **** 2025-09-27 21:14:56.905853 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-09-27 21:14:56.905864 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-27 21:14:56.905874 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-27 21:14:56.905885 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-27 21:14:56.905902 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-09-27 21:14:56.905913 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-27 21:14:56.905924 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-27 21:14:56.905934 | orchestrator | 2025-09-27 21:14:56.905945 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-27 21:14:56.905956 | orchestrator | Saturday 27 September 2025 21:14:51 +0000 (0:00:01.255) 0:03:48.228 **** 2025-09-27 21:14:56.905968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:14:56.905983 | orchestrator | 2025-09-27 21:14:56.905994 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-27 21:14:56.906005 | orchestrator | Saturday 27 September 2025 21:14:52 +0000 (0:00:00.472) 0:03:48.700 **** 2025-09-27 21:14:56.906079 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:56.906092 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:56.906102 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:56.906113 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:56.906124 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:56.906135 | 
orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:56.906145 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:56.906156 | orchestrator | 2025-09-27 21:14:56.906167 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-27 21:14:56.906178 | orchestrator | Saturday 27 September 2025 21:14:53 +0000 (0:00:01.748) 0:03:50.449 **** 2025-09-27 21:14:56.906189 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:56.906216 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:56.906228 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:56.906238 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:56.906249 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:56.906259 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:56.906269 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:56.906280 | orchestrator | 2025-09-27 21:14:56.906291 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-27 21:14:56.906302 | orchestrator | Saturday 27 September 2025 21:14:54 +0000 (0:00:00.657) 0:03:51.106 **** 2025-09-27 21:14:56.906312 | orchestrator | changed: [testbed-manager] 2025-09-27 21:14:56.906323 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:14:56.906334 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:14:56.906345 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:14:56.906355 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:14:56.906366 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:14:56.906377 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:14:56.906387 | orchestrator | 2025-09-27 21:14:56.906398 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-27 21:14:56.906409 | orchestrator | Saturday 27 September 2025 21:14:55 +0000 (0:00:00.708) 0:03:51.815 **** 2025-09-27 21:14:56.906420 | orchestrator | ok: [testbed-manager] 2025-09-27 21:14:56.906430 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:14:56.906441 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:14:56.906452 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:14:56.906462 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:14:56.906473 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:14:56.906483 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:14:56.906494 | orchestrator | 2025-09-27 21:14:56.906505 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-27 21:14:56.906515 | orchestrator | Saturday 27 September 2025 21:14:55 +0000 (0:00:00.614) 0:03:52.430 **** 2025-09-27 21:14:56.906531 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759006200.1295104, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:14:56.906553 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 
'atime': 1759006232.7540405, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:14:56.906571 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759006232.3985548, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:14:56.906583 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759006249.5167265, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:14:56.906595 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759006232.1683636, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:14:56.906646 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759006228.6771803, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212430 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759006242.3793454, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212563 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 
'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212596 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212658 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212667 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212674 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212681 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212720 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 
1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:15:13.212728 | orchestrator | 2025-09-27 21:15:13.212736 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-27 21:15:13.212745 | orchestrator | Saturday 27 September 2025 21:14:56 +0000 (0:00:01.055) 0:03:53.485 **** 2025-09-27 21:15:13.212751 | orchestrator | changed: [testbed-manager] 2025-09-27 21:15:13.212769 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:15:13.212776 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:15:13.212782 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:15:13.212787 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:15:13.212793 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:15:13.212800 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:15:13.212807 | orchestrator | 2025-09-27 21:15:13.212813 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-27 21:15:13.212820 | orchestrator | Saturday 27 September 2025 21:14:57 +0000 (0:00:01.089) 0:03:54.574 **** 2025-09-27 21:15:13.212826 | orchestrator | changed: [testbed-manager] 2025-09-27 21:15:13.212832 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:15:13.212838 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:15:13.212844 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:15:13.212850 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:15:13.212856 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:15:13.212861 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:15:13.212868 | orchestrator | 2025-09-27 21:15:13.212874 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-27 21:15:13.212880 | orchestrator | Saturday 27 September 2025 21:14:59 +0000 (0:00:01.180) 0:03:55.755 **** 2025-09-27 21:15:13.212887 | orchestrator | changed: [testbed-manager] 2025-09-27 21:15:13.212893 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:15:13.212899 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:15:13.212905 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:15:13.212912 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:15:13.212918 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:15:13.212924 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:15:13.212931 | orchestrator | 2025-09-27 21:15:13.212937 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-27 21:15:13.212945 | orchestrator | Saturday 27 September 2025 21:15:00 +0000 (0:00:01.220) 0:03:56.976 **** 2025-09-27 21:15:13.212951 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:15:13.212958 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:15:13.212964 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:15:13.212971 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:15:13.212978 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:15:13.212985 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:15:13.212991 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:15:13.212998 | orchestrator | 2025-09-27 21:15:13.213004 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] 
**************** 2025-09-27 21:15:13.213011 | orchestrator | Saturday 27 September 2025 21:15:00 +0000 (0:00:00.275) 0:03:57.251 **** 2025-09-27 21:15:13.213018 | orchestrator | ok: [testbed-manager] 2025-09-27 21:15:13.213026 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:15:13.213033 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:15:13.213040 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:15:13.213047 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:15:13.213053 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:15:13.213061 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:15:13.213068 | orchestrator | 2025-09-27 21:15:13.213075 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-27 21:15:13.213081 | orchestrator | Saturday 27 September 2025 21:15:01 +0000 (0:00:00.783) 0:03:58.035 **** 2025-09-27 21:15:13.213089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:15:13.213097 | orchestrator | 2025-09-27 21:15:13.213104 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-27 21:15:13.213111 | orchestrator | Saturday 27 September 2025 21:15:01 +0000 (0:00:00.403) 0:03:58.438 **** 2025-09-27 21:15:13.213118 | orchestrator | ok: [testbed-manager] 2025-09-27 21:15:13.213125 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:15:13.213139 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:15:13.213146 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:15:13.213153 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:15:13.213160 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:15:13.213166 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:15:13.213174 | orchestrator | 2025-09-27 21:15:13.213181 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-27 21:15:13.213188 | orchestrator | Saturday 27 September 2025 21:15:10 +0000 (0:00:08.771) 0:04:07.209 **** 2025-09-27 21:15:13.213194 | orchestrator | ok: [testbed-manager] 2025-09-27 21:15:13.213200 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:15:13.213206 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:15:13.213212 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:15:13.213218 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:15:13.213224 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:15:13.213231 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:15:13.213239 | orchestrator | 2025-09-27 21:15:13.213246 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-27 21:15:13.213254 | orchestrator | Saturday 27 September 2025 21:15:12 +0000 (0:00:01.436) 0:04:08.646 **** 2025-09-27 21:15:13.213261 | orchestrator | ok: [testbed-manager] 2025-09-27 21:15:13.213268 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:15:13.213275 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:15:13.213282 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:15:13.213288 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:15:13.213295 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:15:13.213302 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:15:13.213308 | orchestrator | 2025-09-27 21:15:13.213331 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each 
operating system] ****** 2025-09-27 21:16:22.841445 | orchestrator | Saturday 27 September 2025 21:15:13 +0000 (0:00:01.148) 0:04:09.794 **** 2025-09-27 21:16:22.841553 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:22.841569 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:22.841580 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:22.841591 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:22.841602 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:22.841613 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:22.841624 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:22.841663 | orchestrator | 2025-09-27 21:16:22.841678 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-27 21:16:22.841690 | orchestrator | Saturday 27 September 2025 21:15:13 +0000 (0:00:00.276) 0:04:10.071 **** 2025-09-27 21:16:22.841701 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:22.841712 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:22.841723 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:22.841734 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:22.841744 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:22.841754 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:22.841765 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:22.841775 | orchestrator | 2025-09-27 21:16:22.841786 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-27 21:16:22.841798 | orchestrator | Saturday 27 September 2025 21:15:13 +0000 (0:00:00.320) 0:04:10.391 **** 2025-09-27 21:16:22.841809 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:22.841819 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:22.841830 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:22.841840 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:22.841852 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:22.841862 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:22.841872 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:22.841883 | orchestrator | 2025-09-27 21:16:22.841894 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-27 21:16:22.841905 | orchestrator | Saturday 27 September 2025 21:15:14 +0000 (0:00:00.230) 0:04:10.622 **** 2025-09-27 21:16:22.841916 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:22.841950 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:22.841961 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:22.841971 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:22.841982 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:22.841994 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:22.842005 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:22.842071 | orchestrator | 2025-09-27 21:16:22.842084 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-27 21:16:22.842097 | orchestrator | Saturday 27 September 2025 21:15:19 +0000 (0:00:05.394) 0:04:16.017 **** 2025-09-27 21:16:22.842111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:16:22.842126 | orchestrator | 2025-09-27 21:16:22.842139 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily 
timers] ************************ 2025-09-27 21:16:22.842151 | orchestrator | Saturday 27 September 2025 21:15:19 +0000 (0:00:00.383) 0:04:16.400 **** 2025-09-27 21:16:22.842164 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842177 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-27 21:16:22.842189 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842202 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-27 21:16:22.842215 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:22.842227 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842239 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:22.842251 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-27 21:16:22.842263 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842275 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-27 21:16:22.842288 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:22.842300 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842312 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-27 21:16:22.842325 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:16:22.842337 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842349 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:22.842362 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-27 21:16:22.842372 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:22.842383 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-09-27 21:16:22.842394 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-27 21:16:22.842404 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:22.842415 | orchestrator | 2025-09-27 21:16:22.842426 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-27 21:16:22.842437 | orchestrator | Saturday 27 September 2025 21:15:20 +0000 (0:00:00.303) 0:04:16.703 **** 2025-09-27 21:16:22.842448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:16:22.842459 | orchestrator | 2025-09-27 21:16:22.842470 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-27 21:16:22.842481 | orchestrator | Saturday 27 September 2025 21:15:20 +0000 (0:00:00.356) 0:04:17.059 **** 2025-09-27 21:16:22.842492 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-27 21:16:22.842502 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:22.842513 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-27 21:16:22.842524 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:22.842535 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-27 21:16:22.842571 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-27 21:16:22.842584 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:22.842595 | orchestrator | skipping: [testbed-node-5] 
2025-09-27 21:16:22.842606 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-09-27 21:16:22.842616 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-27 21:16:22.842628 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:22.842714 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:22.842729 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-27 21:16:22.842740 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:22.842751 | orchestrator | 2025-09-27 21:16:22.842762 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-27 21:16:22.842773 | orchestrator | Saturday 27 September 2025 21:15:20 +0000 (0:00:00.302) 0:04:17.362 **** 2025-09-27 21:16:22.842784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:16:22.842795 | orchestrator | 2025-09-27 21:16:22.842806 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-27 21:16:22.842817 | orchestrator | Saturday 27 September 2025 21:15:21 +0000 (0:00:00.357) 0:04:17.720 **** 2025-09-27 21:16:22.842827 | orchestrator | changed: [testbed-manager] 2025-09-27 21:16:22.842838 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:22.842849 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:22.842860 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:22.842871 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:22.842882 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:16:22.842893 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:22.842904 | orchestrator | 2025-09-27 21:16:22.842914 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-27 21:16:22.842925 | orchestrator | Saturday 27 September 2025 21:15:55 +0000 (0:00:34.091) 0:04:51.811 **** 2025-09-27 21:16:22.842936 | orchestrator | changed: [testbed-manager] 2025-09-27 21:16:22.842946 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:22.842957 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:22.842983 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:22.842994 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:22.843006 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:16:22.843016 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:22.843027 | orchestrator | 2025-09-27 21:16:22.843040 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-27 21:16:22.843058 | orchestrator | Saturday 27 September 2025 21:16:03 +0000 (0:00:08.408) 0:05:00.220 **** 2025-09-27 21:16:22.843076 | orchestrator | changed: [testbed-manager] 2025-09-27 21:16:22.843092 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:22.843107 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:22.843125 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:22.843144 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:22.843164 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:16:22.843184 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:22.843201 | orchestrator | 2025-09-27 21:16:22.843212 | orchestrator | TASK [osism.commons.cleanup : Remove useless 
packages from the cache] ********** 2025-09-27 21:16:22.843223 | orchestrator | Saturday 27 September 2025 21:16:11 +0000 (0:00:08.085) 0:05:08.306 **** 2025-09-27 21:16:22.843233 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:22.843244 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:22.843255 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:22.843267 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:22.843277 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:22.843310 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:22.843321 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:22.843346 | orchestrator | 2025-09-27 21:16:22.843358 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-27 21:16:22.843383 | orchestrator | Saturday 27 September 2025 21:16:13 +0000 (0:00:01.841) 0:05:10.148 **** 2025-09-27 21:16:22.843395 | orchestrator | changed: [testbed-manager] 2025-09-27 21:16:22.843405 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:22.843416 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:22.843427 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:22.843437 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:22.843448 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:16:22.843458 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:22.843469 | orchestrator | 2025-09-27 21:16:22.843480 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-27 21:16:22.843490 | orchestrator | Saturday 27 September 2025 21:16:19 +0000 (0:00:06.242) 0:05:16.390 **** 2025-09-27 21:16:22.843503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:16:22.843516 | orchestrator | 2025-09-27 21:16:22.843527 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-27 21:16:22.843538 | orchestrator | Saturday 27 September 2025 21:16:20 +0000 (0:00:00.494) 0:05:16.884 **** 2025-09-27 21:16:22.843549 | orchestrator | changed: [testbed-manager] 2025-09-27 21:16:22.843559 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:22.843570 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:22.843580 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:22.843591 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:22.843602 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:22.843613 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:16:22.843624 | orchestrator | 2025-09-27 21:16:22.843653 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-27 21:16:22.843665 | orchestrator | Saturday 27 September 2025 21:16:20 +0000 (0:00:00.713) 0:05:17.598 **** 2025-09-27 21:16:22.843676 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:22.843686 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:22.843697 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:22.843708 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:22.843734 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:38.403189 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:38.403329 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:38.403350 | orchestrator | 2025-09-27 21:16:38.403363 | orchestrator | TASK 
[osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-27 21:16:38.403376 | orchestrator | Saturday 27 September 2025 21:16:22 +0000 (0:00:01.815) 0:05:19.414 **** 2025-09-27 21:16:38.403387 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:38.403398 | orchestrator | changed: [testbed-manager] 2025-09-27 21:16:38.403409 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:38.403420 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:38.403430 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:38.403441 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:38.403451 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:16:38.403462 | orchestrator | 2025-09-27 21:16:38.403473 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-27 21:16:38.403483 | orchestrator | Saturday 27 September 2025 21:16:23 +0000 (0:00:00.856) 0:05:20.271 **** 2025-09-27 21:16:38.403494 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:38.403504 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:38.403514 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:38.403524 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:16:38.403535 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:38.403545 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:38.403556 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:38.403566 | orchestrator | 2025-09-27 21:16:38.403603 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-27 21:16:38.403614 | orchestrator | Saturday 27 September 2025 21:16:23 +0000 (0:00:00.269) 0:05:20.540 **** 2025-09-27 21:16:38.403624 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:38.403635 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:38.403673 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:38.403684 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:16:38.403696 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:38.403708 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:38.403720 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:38.403732 | orchestrator | 2025-09-27 21:16:38.403744 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-27 21:16:38.403755 | orchestrator | Saturday 27 September 2025 21:16:24 +0000 (0:00:00.372) 0:05:20.912 **** 2025-09-27 21:16:38.403767 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:38.403778 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:38.403790 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:38.403802 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:38.403813 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:38.403825 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:38.403836 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:38.403848 | orchestrator | 2025-09-27 21:16:38.403860 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-27 21:16:38.403872 | orchestrator | Saturday 27 September 2025 21:16:24 +0000 (0:00:00.277) 0:05:21.190 **** 2025-09-27 21:16:38.403884 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:38.403896 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:38.403908 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:38.403920 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:16:38.403932 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:38.403944 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:38.403955 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:38.403967 | orchestrator | 2025-09-27 21:16:38.403979 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-27 21:16:38.403991 | orchestrator | Saturday 27 September 2025 21:16:24 +0000 (0:00:00.249) 0:05:21.439 **** 2025-09-27 21:16:38.404002 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:38.404012 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:38.404022 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:38.404033 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:38.404043 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:38.404053 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:38.404063 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:38.404074 | orchestrator | 2025-09-27 21:16:38.404084 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-09-27 21:16:38.404095 | orchestrator | Saturday 27 September 2025 21:16:25 +0000 (0:00:00.292) 0:05:21.732 **** 2025-09-27 21:16:38.404105 | orchestrator | ok: [testbed-manager] =>  2025-09-27 21:16:38.404116 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404126 | orchestrator | ok: [testbed-node-3] =>  2025-09-27 21:16:38.404136 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404146 | orchestrator | ok: [testbed-node-4] =>  2025-09-27 21:16:38.404157 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404167 | orchestrator | ok: [testbed-node-5] =>  2025-09-27 21:16:38.404177 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404187 | orchestrator | ok: [testbed-node-0] =>  2025-09-27 21:16:38.404197 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404207 | orchestrator | ok: [testbed-node-1] =>  2025-09-27 21:16:38.404217 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404227 | orchestrator | ok: [testbed-node-2] =>  2025-09-27 21:16:38.404238 | orchestrator |  docker_version: 5:27.5.1 2025-09-27 21:16:38.404248 | orchestrator | 2025-09-27 21:16:38.404258 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-27 21:16:38.404269 | orchestrator | Saturday 27 September 2025 21:16:25 +0000 (0:00:00.257) 0:05:21.989 **** 2025-09-27 21:16:38.404288 | orchestrator | ok: [testbed-manager] =>  2025-09-27 21:16:38.404299 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404309 | orchestrator | ok: [testbed-node-3] =>  2025-09-27 21:16:38.404319 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404330 | orchestrator | ok: [testbed-node-4] =>  2025-09-27 21:16:38.404340 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404350 | orchestrator | ok: [testbed-node-5] =>  2025-09-27 21:16:38.404360 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404370 | orchestrator | ok: [testbed-node-0] =>  2025-09-27 21:16:38.404380 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404390 | orchestrator | ok: [testbed-node-1] =>  2025-09-27 21:16:38.404401 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404411 | orchestrator | ok: [testbed-node-2] =>  2025-09-27 21:16:38.404421 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-27 21:16:38.404432 | 
orchestrator | 2025-09-27 21:16:38.404442 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-27 21:16:38.404487 | orchestrator | Saturday 27 September 2025 21:16:25 +0000 (0:00:00.273) 0:05:22.262 **** 2025-09-27 21:16:38.404499 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:38.404510 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:38.404520 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:38.404530 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:16:38.404541 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:38.404551 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:38.404561 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:38.404571 | orchestrator | 2025-09-27 21:16:38.404582 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-27 21:16:38.404592 | orchestrator | Saturday 27 September 2025 21:16:25 +0000 (0:00:00.244) 0:05:22.507 **** 2025-09-27 21:16:38.404603 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:38.404613 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:38.404623 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:38.404633 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:16:38.404680 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:38.404693 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:38.404703 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:38.404714 | orchestrator | 2025-09-27 21:16:38.404724 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-27 21:16:38.404735 | orchestrator | Saturday 27 September 2025 21:16:26 +0000 (0:00:00.312) 0:05:22.820 **** 2025-09-27 21:16:38.404747 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:16:38.404761 | orchestrator | 2025-09-27 21:16:38.404772 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-27 21:16:38.404782 | orchestrator | Saturday 27 September 2025 21:16:26 +0000 (0:00:00.398) 0:05:23.218 **** 2025-09-27 21:16:38.404792 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:38.404803 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:16:38.404813 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:38.404823 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:38.404834 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:38.404844 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:38.404854 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:38.404864 | orchestrator | 2025-09-27 21:16:38.404875 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-27 21:16:38.404885 | orchestrator | Saturday 27 September 2025 21:16:27 +0000 (0:00:00.818) 0:05:24.037 **** 2025-09-27 21:16:38.404896 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:38.404906 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:16:38.404916 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:16:38.404936 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:16:38.404953 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:16:38.404971 | orchestrator | ok: [testbed-node-5] 2025-09-27 
21:16:38.404988 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:16:38.405004 | orchestrator | 2025-09-27 21:16:38.405021 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-27 21:16:38.405039 | orchestrator | Saturday 27 September 2025 21:16:30 +0000 (0:00:03.210) 0:05:27.247 **** 2025-09-27 21:16:38.405056 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-27 21:16:38.405074 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-27 21:16:38.405092 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-27 21:16:38.405110 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:16:38.405127 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-27 21:16:38.405144 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-27 21:16:38.405162 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-27 21:16:38.405182 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-27 21:16:38.405201 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-27 21:16:38.405220 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-27 21:16:38.405238 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:16:38.405256 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-27 21:16:38.405274 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-27 21:16:38.405292 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-27 21:16:38.405310 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:16:38.405327 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-27 21:16:38.405344 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-27 21:16:38.405362 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-27 21:16:38.405380 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:16:38.405398 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-27 21:16:38.405415 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-27 21:16:38.405432 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-27 21:16:38.405450 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:16:38.405468 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:16:38.405487 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-27 21:16:38.405505 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-27 21:16:38.405524 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-27 21:16:38.405544 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:16:38.405555 | orchestrator | 2025-09-27 21:16:38.405566 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-27 21:16:38.405577 | orchestrator | Saturday 27 September 2025 21:16:31 +0000 (0:00:00.554) 0:05:27.802 **** 2025-09-27 21:16:38.405587 | orchestrator | ok: [testbed-manager] 2025-09-27 21:16:38.405598 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:16:38.405608 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:16:38.405618 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:16:38.405629 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:16:38.405639 | orchestrator | changed: [testbed-node-2] 2025-09-27 
21:16:38.405698 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:16:38.405709 | orchestrator | 2025-09-27 21:16:38.405744 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-27 21:17:34.446891 | orchestrator | Saturday 27 September 2025 21:16:38 +0000 (0:00:07.180) 0:05:34.982 **** 2025-09-27 21:17:34.447017 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.447036 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447048 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447059 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447092 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447104 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447115 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447126 | orchestrator | 2025-09-27 21:17:34.447137 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-27 21:17:34.447148 | orchestrator | Saturday 27 September 2025 21:16:39 +0000 (0:00:01.277) 0:05:36.260 **** 2025-09-27 21:17:34.447159 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.447169 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447180 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447191 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447202 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447213 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447224 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447234 | orchestrator | 2025-09-27 21:17:34.447245 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-27 21:17:34.447256 | orchestrator | Saturday 27 September 2025 21:16:48 +0000 (0:00:08.989) 0:05:45.250 **** 2025-09-27 21:17:34.447266 | orchestrator | changed: [testbed-manager] 2025-09-27 21:17:34.447277 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447287 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447298 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447309 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447319 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447330 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447340 | orchestrator | 2025-09-27 21:17:34.447351 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-27 21:17:34.447362 | orchestrator | Saturday 27 September 2025 21:16:51 +0000 (0:00:03.160) 0:05:48.410 **** 2025-09-27 21:17:34.447372 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.447383 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447393 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447404 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447414 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447425 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447435 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447446 | orchestrator | 2025-09-27 21:17:34.447459 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-27 21:17:34.447471 | orchestrator | Saturday 27 September 2025 21:16:53 +0000 (0:00:01.386) 0:05:49.797 **** 2025-09-27 21:17:34.447483 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.447495 | orchestrator | 
changed: [testbed-node-3] 2025-09-27 21:17:34.447506 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447518 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447531 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447543 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447553 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447564 | orchestrator | 2025-09-27 21:17:34.447575 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-27 21:17:34.447585 | orchestrator | Saturday 27 September 2025 21:16:54 +0000 (0:00:01.412) 0:05:51.210 **** 2025-09-27 21:17:34.447596 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:34.447607 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:34.447617 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:34.447627 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:34.447638 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:34.447649 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:34.447659 | orchestrator | changed: [testbed-manager] 2025-09-27 21:17:34.447670 | orchestrator | 2025-09-27 21:17:34.447709 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-27 21:17:34.447721 | orchestrator | Saturday 27 September 2025 21:16:55 +0000 (0:00:00.731) 0:05:51.941 **** 2025-09-27 21:17:34.447731 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.447750 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447761 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447772 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447782 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447793 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447804 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447814 | orchestrator | 2025-09-27 21:17:34.447825 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-27 21:17:34.447836 | orchestrator | Saturday 27 September 2025 21:17:05 +0000 (0:00:10.388) 0:06:02.330 **** 2025-09-27 21:17:34.447847 | orchestrator | changed: [testbed-manager] 2025-09-27 21:17:34.447857 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447868 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.447878 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447889 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447899 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.447909 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.447920 | orchestrator | 2025-09-27 21:17:34.447931 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-27 21:17:34.447941 | orchestrator | Saturday 27 September 2025 21:17:06 +0000 (0:00:00.911) 0:06:03.242 **** 2025-09-27 21:17:34.447952 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.447962 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.447973 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.447984 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.447994 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.448004 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.448015 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.448025 | orchestrator | 2025-09-27 21:17:34.448036 | 
orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-27 21:17:34.448047 | orchestrator | Saturday 27 September 2025 21:17:16 +0000 (0:00:09.713) 0:06:12.955 **** 2025-09-27 21:17:34.448057 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.448067 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.448078 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.448089 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.448110 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.448121 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.448147 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.448159 | orchestrator | 2025-09-27 21:17:34.448170 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-27 21:17:34.448181 | orchestrator | Saturday 27 September 2025 21:17:27 +0000 (0:00:11.028) 0:06:23.984 **** 2025-09-27 21:17:34.448192 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-27 21:17:34.448203 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-27 21:17:34.448214 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-27 21:17:34.448224 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-27 21:17:34.448235 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-27 21:17:34.448245 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-27 21:17:34.448256 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-27 21:17:34.448267 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-27 21:17:34.448277 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-27 21:17:34.448288 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-27 21:17:34.448298 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-27 21:17:34.448309 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-27 21:17:34.448319 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-27 21:17:34.448330 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-27 21:17:34.448341 | orchestrator | 2025-09-27 21:17:34.448351 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-27 21:17:34.448369 | orchestrator | Saturday 27 September 2025 21:17:28 +0000 (0:00:01.167) 0:06:25.152 **** 2025-09-27 21:17:34.448380 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:34.448390 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:34.448401 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:34.448412 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:34.448422 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:34.448433 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:34.448443 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:34.448454 | orchestrator | 2025-09-27 21:17:34.448464 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-27 21:17:34.448475 | orchestrator | Saturday 27 September 2025 21:17:29 +0000 (0:00:00.496) 0:06:25.649 **** 2025-09-27 21:17:34.448486 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:34.448497 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:34.448507 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:34.448518 | 
orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:34.448528 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:34.448539 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:34.448549 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:34.448560 | orchestrator | 2025-09-27 21:17:34.448571 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-27 21:17:34.448583 | orchestrator | Saturday 27 September 2025 21:17:32 +0000 (0:00:03.748) 0:06:29.397 **** 2025-09-27 21:17:34.448594 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:34.448604 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:34.448615 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:34.448625 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:34.448636 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:34.448646 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:34.448656 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:34.448667 | orchestrator | 2025-09-27 21:17:34.448699 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-27 21:17:34.448710 | orchestrator | Saturday 27 September 2025 21:17:33 +0000 (0:00:00.486) 0:06:29.884 **** 2025-09-27 21:17:34.448721 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-27 21:17:34.448732 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-27 21:17:34.448743 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:34.448753 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-27 21:17:34.448764 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-27 21:17:34.448774 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:34.448785 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-27 21:17:34.448795 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-27 21:17:34.448806 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:34.448816 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-27 21:17:34.448827 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-27 21:17:34.448837 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:34.448848 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-27 21:17:34.448858 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-27 21:17:34.448869 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:34.448879 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-27 21:17:34.448890 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-27 21:17:34.448900 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:34.448911 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-27 21:17:34.448921 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-27 21:17:34.448932 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:34.448950 | orchestrator | 2025-09-27 21:17:34.448960 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-09-27 21:17:34.448971 | orchestrator | Saturday 27 September 2025 21:17:33 +0000 (0:00:00.662) 0:06:30.546 **** 2025-09-27 21:17:34.448982 
| orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:34.448993 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:34.449003 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:34.449014 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:34.449024 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:34.449035 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:34.449045 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:34.449056 | orchestrator | 2025-09-27 21:17:34.449073 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-27 21:17:54.756818 | orchestrator | Saturday 27 September 2025 21:17:34 +0000 (0:00:00.485) 0:06:31.031 **** 2025-09-27 21:17:54.756938 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:54.756954 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:54.756966 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:54.756977 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:54.756988 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:54.756999 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:54.757010 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:54.757020 | orchestrator | 2025-09-27 21:17:54.757032 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-27 21:17:54.757043 | orchestrator | Saturday 27 September 2025 21:17:34 +0000 (0:00:00.457) 0:06:31.489 **** 2025-09-27 21:17:54.757053 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:54.757064 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:17:54.757074 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:17:54.757084 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:17:54.757095 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:17:54.757105 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:17:54.757116 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:17:54.757126 | orchestrator | 2025-09-27 21:17:54.757188 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-27 21:17:54.757201 | orchestrator | Saturday 27 September 2025 21:17:35 +0000 (0:00:00.493) 0:06:31.982 **** 2025-09-27 21:17:54.757212 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.757223 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:17:54.757234 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:17:54.757244 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:17:54.757255 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.757266 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.757278 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.757289 | orchestrator | 2025-09-27 21:17:54.757301 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-27 21:17:54.757313 | orchestrator | Saturday 27 September 2025 21:17:36 +0000 (0:00:01.544) 0:06:33.526 **** 2025-09-27 21:17:54.757326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:17:54.757341 | orchestrator | 2025-09-27 21:17:54.757354 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-27 21:17:54.757366 | 
orchestrator | Saturday 27 September 2025 21:17:37 +0000 (0:00:00.946) 0:06:34.473 **** 2025-09-27 21:17:54.757378 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.757390 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:54.757403 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:54.757414 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:54.757426 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:54.757438 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:54.757449 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:54.757491 | orchestrator | 2025-09-27 21:17:54.757503 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-27 21:17:54.757515 | orchestrator | Saturday 27 September 2025 21:17:38 +0000 (0:00:00.827) 0:06:35.300 **** 2025-09-27 21:17:54.757527 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.757539 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:54.757550 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:54.757560 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:54.757570 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:54.757580 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:54.757591 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:54.757601 | orchestrator | 2025-09-27 21:17:54.757611 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-27 21:17:54.757622 | orchestrator | Saturday 27 September 2025 21:17:39 +0000 (0:00:00.854) 0:06:36.155 **** 2025-09-27 21:17:54.757632 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.757643 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:54.757653 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:54.757663 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:54.757674 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:54.757705 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:54.757716 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:54.757726 | orchestrator | 2025-09-27 21:17:54.757737 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-09-27 21:17:54.757749 | orchestrator | Saturday 27 September 2025 21:17:40 +0000 (0:00:01.380) 0:06:37.535 **** 2025-09-27 21:17:54.757759 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:17:54.757769 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:17:54.757780 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:17:54.757790 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:17:54.757800 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.757810 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.757821 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.757831 | orchestrator | 2025-09-27 21:17:54.757842 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-27 21:17:54.757852 | orchestrator | Saturday 27 September 2025 21:17:42 +0000 (0:00:01.424) 0:06:38.960 **** 2025-09-27 21:17:54.757863 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.757873 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:54.757883 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:54.757894 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:54.757904 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:54.757914 | orchestrator | 
changed: [testbed-node-1] 2025-09-27 21:17:54.757924 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:54.757935 | orchestrator | 2025-09-27 21:17:54.757945 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-27 21:17:54.757956 | orchestrator | Saturday 27 September 2025 21:17:43 +0000 (0:00:01.360) 0:06:40.320 **** 2025-09-27 21:17:54.757966 | orchestrator | changed: [testbed-manager] 2025-09-27 21:17:54.757976 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:54.757987 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:54.757997 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:54.758008 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:17:54.758076 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:54.758087 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:17:54.758097 | orchestrator | 2025-09-27 21:17:54.758127 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-09-27 21:17:54.758139 | orchestrator | Saturday 27 September 2025 21:17:45 +0000 (0:00:01.338) 0:06:41.658 **** 2025-09-27 21:17:54.758150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:17:54.758161 | orchestrator | 2025-09-27 21:17:54.758172 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-09-27 21:17:54.758192 | orchestrator | Saturday 27 September 2025 21:17:46 +0000 (0:00:00.964) 0:06:42.623 **** 2025-09-27 21:17:54.758202 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.758213 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:17:54.758223 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:17:54.758234 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:17:54.758244 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.758255 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.758265 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.758276 | orchestrator | 2025-09-27 21:17:54.758287 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-09-27 21:17:54.758297 | orchestrator | Saturday 27 September 2025 21:17:47 +0000 (0:00:01.435) 0:06:44.059 **** 2025-09-27 21:17:54.758308 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.758319 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:17:54.758329 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:17:54.758339 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:17:54.758350 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.758360 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.758370 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.758381 | orchestrator | 2025-09-27 21:17:54.758392 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-27 21:17:54.758402 | orchestrator | Saturday 27 September 2025 21:17:48 +0000 (0:00:01.108) 0:06:45.168 **** 2025-09-27 21:17:54.758413 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.758423 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:17:54.758434 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:17:54.758444 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:17:54.758455 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.758465 | 
orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.758475 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.758486 | orchestrator | 2025-09-27 21:17:54.758496 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-27 21:17:54.758507 | orchestrator | Saturday 27 September 2025 21:17:49 +0000 (0:00:01.117) 0:06:46.285 **** 2025-09-27 21:17:54.758517 | orchestrator | ok: [testbed-manager] 2025-09-27 21:17:54.758528 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:17:54.758538 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:17:54.758549 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:17:54.758559 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.758569 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.758580 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.758590 | orchestrator | 2025-09-27 21:17:54.758601 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-27 21:17:54.758611 | orchestrator | Saturday 27 September 2025 21:17:50 +0000 (0:00:01.151) 0:06:47.437 **** 2025-09-27 21:17:54.758622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:17:54.758633 | orchestrator | 2025-09-27 21:17:54.758644 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758654 | orchestrator | Saturday 27 September 2025 21:17:51 +0000 (0:00:01.014) 0:06:48.451 **** 2025-09-27 21:17:54.758665 | orchestrator | 2025-09-27 21:17:54.758675 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758710 | orchestrator | Saturday 27 September 2025 21:17:51 +0000 (0:00:00.038) 0:06:48.489 **** 2025-09-27 21:17:54.758721 | orchestrator | 2025-09-27 21:17:54.758732 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758742 | orchestrator | Saturday 27 September 2025 21:17:51 +0000 (0:00:00.045) 0:06:48.535 **** 2025-09-27 21:17:54.758752 | orchestrator | 2025-09-27 21:17:54.758763 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758773 | orchestrator | Saturday 27 September 2025 21:17:51 +0000 (0:00:00.038) 0:06:48.573 **** 2025-09-27 21:17:54.758795 | orchestrator | 2025-09-27 21:17:54.758806 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758817 | orchestrator | Saturday 27 September 2025 21:17:52 +0000 (0:00:00.038) 0:06:48.612 **** 2025-09-27 21:17:54.758827 | orchestrator | 2025-09-27 21:17:54.758837 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758848 | orchestrator | Saturday 27 September 2025 21:17:52 +0000 (0:00:00.046) 0:06:48.659 **** 2025-09-27 21:17:54.758858 | orchestrator | 2025-09-27 21:17:54.758869 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-27 21:17:54.758879 | orchestrator | Saturday 27 September 2025 21:17:52 +0000 (0:00:00.040) 0:06:48.700 **** 2025-09-27 21:17:54.758890 | orchestrator | 2025-09-27 21:17:54.758900 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-09-27 21:17:54.758911 | orchestrator | Saturday 27 September 2025 21:17:52 +0000 (0:00:00.040) 0:06:48.740 **** 2025-09-27 21:17:54.758921 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:17:54.758932 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:17:54.758942 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:17:54.758953 | orchestrator | 2025-09-27 21:17:54.758963 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-27 21:17:54.758973 | orchestrator | Saturday 27 September 2025 21:17:53 +0000 (0:00:01.203) 0:06:49.943 **** 2025-09-27 21:17:54.758984 | orchestrator | changed: [testbed-manager] 2025-09-27 21:17:54.758994 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:17:54.759012 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:17:54.759023 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:17:54.759033 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:17:54.759050 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.634909 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:24.635054 | orchestrator | 2025-09-27 21:18:24.635071 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-27 21:18:24.635084 | orchestrator | Saturday 27 September 2025 21:17:54 +0000 (0:00:01.391) 0:06:51.335 **** 2025-09-27 21:18:24.635094 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:24.635104 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:24.635114 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:24.635123 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:24.635133 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:24.635142 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.635152 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:24.635162 | orchestrator | 2025-09-27 21:18:24.635172 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-27 21:18:24.635182 | orchestrator | Saturday 27 September 2025 21:17:57 +0000 (0:00:02.536) 0:06:53.872 **** 2025-09-27 21:18:24.635191 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:24.635201 | orchestrator | 2025-09-27 21:18:24.635210 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-27 21:18:24.635220 | orchestrator | Saturday 27 September 2025 21:17:57 +0000 (0:00:00.110) 0:06:53.982 **** 2025-09-27 21:18:24.635230 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.635240 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:24.635250 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:24.635259 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:24.635269 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.635278 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:24.635288 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:24.635304 | orchestrator | 2025-09-27 21:18:24.635321 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-27 21:18:24.635343 | orchestrator | Saturday 27 September 2025 21:17:58 +0000 (0:00:01.036) 0:06:55.019 **** 2025-09-27 21:18:24.635366 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:24.635381 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:24.635433 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:24.635451 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:24.635468 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:24.635479 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:24.635490 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:24.635500 | orchestrator | 2025-09-27 21:18:24.635511 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-27 21:18:24.635522 | orchestrator | Saturday 27 September 2025 21:17:58 +0000 (0:00:00.544) 0:06:55.563 **** 2025-09-27 21:18:24.635533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:18:24.635547 | orchestrator | 2025-09-27 21:18:24.635559 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-27 21:18:24.635570 | orchestrator | Saturday 27 September 2025 21:18:00 +0000 (0:00:01.052) 0:06:56.616 **** 2025-09-27 21:18:24.635580 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.635591 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:24.635602 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:24.635613 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:24.635624 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:24.635635 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:24.635646 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:24.635656 | orchestrator | 2025-09-27 21:18:24.635667 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-27 21:18:24.635677 | orchestrator | Saturday 27 September 2025 21:18:00 +0000 (0:00:00.856) 0:06:57.473 **** 2025-09-27 21:18:24.635688 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-27 21:18:24.635790 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-27 21:18:24.635812 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-27 21:18:24.635830 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-27 21:18:24.635847 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-27 21:18:24.635860 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-27 21:18:24.635869 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-27 21:18:24.635879 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-27 21:18:24.635889 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-27 21:18:24.635900 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-27 21:18:24.635918 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-27 21:18:24.635944 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-27 21:18:24.635965 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-27 21:18:24.635983 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-27 21:18:24.636002 | orchestrator | 2025-09-27 21:18:24.636021 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-27 21:18:24.636040 | orchestrator | Saturday 27 September 2025 21:18:03 +0000 (0:00:02.666) 0:07:00.140 **** 2025-09-27 21:18:24.636056 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:24.636067 
| orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:24.636078 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:24.636089 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:24.636099 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:24.636110 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:24.636120 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:24.636131 | orchestrator | 2025-09-27 21:18:24.636141 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-27 21:18:24.636153 | orchestrator | Saturday 27 September 2025 21:18:04 +0000 (0:00:00.518) 0:07:00.658 **** 2025-09-27 21:18:24.636210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:18:24.636238 | orchestrator | 2025-09-27 21:18:24.636249 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-27 21:18:24.636260 | orchestrator | Saturday 27 September 2025 21:18:05 +0000 (0:00:01.065) 0:07:01.723 **** 2025-09-27 21:18:24.636271 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.636284 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:24.636300 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:24.636311 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:24.636321 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:24.636335 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:24.636353 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:24.636378 | orchestrator | 2025-09-27 21:18:24.636400 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-27 21:18:24.636417 | orchestrator | Saturday 27 September 2025 21:18:05 +0000 (0:00:00.843) 0:07:02.567 **** 2025-09-27 21:18:24.636434 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.636451 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:24.636470 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:24.636480 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:24.636491 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:24.636501 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:24.636512 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:24.636522 | orchestrator | 2025-09-27 21:18:24.636533 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-27 21:18:24.636543 | orchestrator | Saturday 27 September 2025 21:18:06 +0000 (0:00:00.842) 0:07:03.410 **** 2025-09-27 21:18:24.636554 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:24.636564 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:24.636575 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:24.636585 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:24.636596 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:24.636606 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:24.636616 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:24.636627 | orchestrator | 2025-09-27 21:18:24.636637 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-27 21:18:24.636648 | orchestrator | Saturday 27 September 2025 21:18:07 +0000 (0:00:00.549) 0:07:03.959 **** 2025-09-27 
21:18:24.636659 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.636669 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:24.636679 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:24.636690 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:24.636744 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:24.636760 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:24.636770 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:24.636781 | orchestrator | 2025-09-27 21:18:24.636792 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-27 21:18:24.636802 | orchestrator | Saturday 27 September 2025 21:18:09 +0000 (0:00:02.180) 0:07:06.139 **** 2025-09-27 21:18:24.636813 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:24.636824 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:24.636834 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:24.636845 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:24.636856 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:24.636866 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:24.636876 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:24.636887 | orchestrator | 2025-09-27 21:18:24.636899 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-27 21:18:24.636917 | orchestrator | Saturday 27 September 2025 21:18:10 +0000 (0:00:00.536) 0:07:06.675 **** 2025-09-27 21:18:24.636945 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.636965 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:24.636982 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:24.637016 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.637035 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:24.637049 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:24.637059 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:24.637069 | orchestrator | 2025-09-27 21:18:24.637080 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-27 21:18:24.637091 | orchestrator | Saturday 27 September 2025 21:18:18 +0000 (0:00:08.452) 0:07:15.128 **** 2025-09-27 21:18:24.637101 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.637111 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:24.637122 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:24.637132 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:24.637143 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.637153 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:24.637163 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:24.637174 | orchestrator | 2025-09-27 21:18:24.637184 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-09-27 21:18:24.637195 | orchestrator | Saturday 27 September 2025 21:18:19 +0000 (0:00:01.412) 0:07:16.540 **** 2025-09-27 21:18:24.637205 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.637216 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:24.637226 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:24.637236 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:24.637246 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.637257 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:24.637267 | orchestrator | changed: [testbed-node-2] 
2025-09-27 21:18:24.637277 | orchestrator | 2025-09-27 21:18:24.637288 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-27 21:18:24.637299 | orchestrator | Saturday 27 September 2025 21:18:21 +0000 (0:00:01.937) 0:07:18.478 **** 2025-09-27 21:18:24.637309 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.637320 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:24.637330 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:24.637340 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:24.637351 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:24.637361 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:24.637374 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:24.637392 | orchestrator | 2025-09-27 21:18:24.637408 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-27 21:18:24.637443 | orchestrator | Saturday 27 September 2025 21:18:23 +0000 (0:00:01.831) 0:07:20.309 **** 2025-09-27 21:18:24.637465 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:24.637482 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:24.637501 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:24.637519 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:24.637552 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.737422 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.737548 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.737563 | orchestrator | 2025-09-27 21:18:56.737576 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-27 21:18:56.737589 | orchestrator | Saturday 27 September 2025 21:18:24 +0000 (0:00:00.908) 0:07:21.218 **** 2025-09-27 21:18:56.737600 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:56.737612 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:56.737622 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:56.737633 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:56.737644 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:56.737655 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:56.737666 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:56.737677 | orchestrator | 2025-09-27 21:18:56.737726 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-27 21:18:56.737747 | orchestrator | Saturday 27 September 2025 21:18:25 +0000 (0:00:01.015) 0:07:22.233 **** 2025-09-27 21:18:56.737765 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:56.737817 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:56.737830 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:56.737841 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:56.737851 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:56.737862 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:56.737872 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:56.737883 | orchestrator | 2025-09-27 21:18:56.737894 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-27 21:18:56.737905 | orchestrator | Saturday 27 September 2025 21:18:26 +0000 (0:00:00.547) 0:07:22.781 **** 2025-09-27 21:18:56.737916 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.737926 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.737938 | orchestrator | ok: 
[testbed-node-4] 2025-09-27 21:18:56.737950 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.737962 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.737975 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.737987 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.737999 | orchestrator | 2025-09-27 21:18:56.738011 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-27 21:18:56.738092 | orchestrator | Saturday 27 September 2025 21:18:26 +0000 (0:00:00.577) 0:07:23.358 **** 2025-09-27 21:18:56.738112 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.738129 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.738146 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.738163 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.738181 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.738199 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.738216 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.738234 | orchestrator | 2025-09-27 21:18:56.738252 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-27 21:18:56.738270 | orchestrator | Saturday 27 September 2025 21:18:27 +0000 (0:00:00.536) 0:07:23.895 **** 2025-09-27 21:18:56.738290 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.738309 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.738327 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.738338 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.738349 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.738359 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.738369 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.738380 | orchestrator | 2025-09-27 21:18:56.738390 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-27 21:18:56.738401 | orchestrator | Saturday 27 September 2025 21:18:27 +0000 (0:00:00.525) 0:07:24.421 **** 2025-09-27 21:18:56.738412 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.738422 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.738432 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.738443 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.738453 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.738463 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.738473 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.738484 | orchestrator | 2025-09-27 21:18:56.738494 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-27 21:18:56.738505 | orchestrator | Saturday 27 September 2025 21:18:33 +0000 (0:00:05.925) 0:07:30.347 **** 2025-09-27 21:18:56.738515 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:56.738526 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:56.738536 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:56.738547 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:56.738558 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:56.738568 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:56.738579 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:18:56.738590 | orchestrator | 2025-09-27 21:18:56.738600 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-27 21:18:56.738611 | orchestrator | Saturday 27 September 2025 21:18:34 +0000 
(0:00:00.466) 0:07:30.813 **** 2025-09-27 21:18:56.738636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:18:56.738650 | orchestrator | 2025-09-27 21:18:56.738661 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-27 21:18:56.738671 | orchestrator | Saturday 27 September 2025 21:18:34 +0000 (0:00:00.752) 0:07:31.565 **** 2025-09-27 21:18:56.738682 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.738753 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.738766 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.738776 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.738787 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.738797 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.738807 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.738818 | orchestrator | 2025-09-27 21:18:56.738828 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-27 21:18:56.738839 | orchestrator | Saturday 27 September 2025 21:18:37 +0000 (0:00:02.284) 0:07:33.850 **** 2025-09-27 21:18:56.738850 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.738860 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.738871 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.738881 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.738891 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.738902 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.738912 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.738922 | orchestrator | 2025-09-27 21:18:56.738968 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-27 21:18:56.738980 | orchestrator | Saturday 27 September 2025 21:18:38 +0000 (0:00:01.142) 0:07:34.992 **** 2025-09-27 21:18:56.738991 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.739002 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.739012 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.739022 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.739033 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.739043 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.739053 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.739064 | orchestrator | 2025-09-27 21:18:56.739074 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-27 21:18:56.739085 | orchestrator | Saturday 27 September 2025 21:18:39 +0000 (0:00:00.864) 0:07:35.856 **** 2025-09-27 21:18:56.739096 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739109 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739120 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739130 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739141 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739165 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739176 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-27 21:18:56.739186 | orchestrator | 2025-09-27 21:18:56.739197 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-27 21:18:56.739208 | orchestrator | Saturday 27 September 2025 21:18:40 +0000 (0:00:01.722) 0:07:37.579 **** 2025-09-27 21:18:56.739228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:18:56.739239 | orchestrator | 2025-09-27 21:18:56.739251 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-27 21:18:56.739272 | orchestrator | Saturday 27 September 2025 21:18:41 +0000 (0:00:01.009) 0:07:38.588 **** 2025-09-27 21:18:56.739292 | orchestrator | changed: [testbed-manager] 2025-09-27 21:18:56.739311 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:56.739330 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:56.739348 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:56.739369 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:56.739387 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:56.739406 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:56.739427 | orchestrator | 2025-09-27 21:18:56.739447 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-27 21:18:56.739466 | orchestrator | Saturday 27 September 2025 21:18:51 +0000 (0:00:09.535) 0:07:48.124 **** 2025-09-27 21:18:56.739485 | orchestrator | ok: [testbed-manager] 2025-09-27 21:18:56.739505 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.739524 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.739544 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.739557 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.739567 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.739578 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.739588 | orchestrator | 2025-09-27 21:18:56.739599 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-27 21:18:56.739610 | orchestrator | Saturday 27 September 2025 21:18:53 +0000 (0:00:01.964) 0:07:50.089 **** 2025-09-27 21:18:56.739620 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:18:56.739630 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:18:56.739641 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:18:56.739651 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:18:56.739661 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:18:56.739671 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:18:56.739682 | orchestrator | 2025-09-27 21:18:56.739738 | orchestrator | RUNNING HANDLER [osism.services.chrony : 
Restart chrony service] *************** 2025-09-27 21:18:56.739758 | orchestrator | Saturday 27 September 2025 21:18:54 +0000 (0:00:01.362) 0:07:51.452 **** 2025-09-27 21:18:56.739777 | orchestrator | changed: [testbed-manager] 2025-09-27 21:18:56.739793 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:18:56.739804 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:18:56.739814 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:18:56.739825 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:18:56.739835 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:18:56.739846 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:18:56.739856 | orchestrator | 2025-09-27 21:18:56.739867 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-27 21:18:56.739877 | orchestrator | 2025-09-27 21:18:56.739888 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-27 21:18:56.739899 | orchestrator | Saturday 27 September 2025 21:18:56 +0000 (0:00:01.284) 0:07:52.737 **** 2025-09-27 21:18:56.739909 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:18:56.739920 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:18:56.739938 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:18:56.739949 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:18:56.739960 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:18:56.739970 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:18:56.739990 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:19:24.871427 | orchestrator | 2025-09-27 21:19:24.871537 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-27 21:19:24.871551 | orchestrator | 2025-09-27 21:19:24.871559 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-27 21:19:24.871595 | orchestrator | Saturday 27 September 2025 21:18:56 +0000 (0:00:00.587) 0:07:53.324 **** 2025-09-27 21:19:24.871603 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.871611 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.871619 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.871627 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.871635 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.871715 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.871723 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.871730 | orchestrator | 2025-09-27 21:19:24.871738 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-27 21:19:24.871745 | orchestrator | Saturday 27 September 2025 21:18:58 +0000 (0:00:01.565) 0:07:54.889 **** 2025-09-27 21:19:24.871753 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:19:24.871763 | orchestrator | ok: [testbed-manager] 2025-09-27 21:19:24.871771 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:19:24.871778 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:19:24.871786 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:19:24.871793 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:19:24.871801 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:19:24.871808 | orchestrator | 2025-09-27 21:19:24.871815 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-27 21:19:24.871823 | orchestrator | Saturday 27 September 2025 21:18:59 +0000 
(0:00:01.580) 0:07:56.469 **** 2025-09-27 21:19:24.871831 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:19:24.871839 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:19:24.871846 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:19:24.871854 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:19:24.871862 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:19:24.871870 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:19:24.871878 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:19:24.871885 | orchestrator | 2025-09-27 21:19:24.871894 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-27 21:19:24.871902 | orchestrator | Saturday 27 September 2025 21:19:00 +0000 (0:00:00.514) 0:07:56.983 **** 2025-09-27 21:19:24.871910 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:19:24.871920 | orchestrator | 2025-09-27 21:19:24.871928 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-27 21:19:24.871936 | orchestrator | Saturday 27 September 2025 21:19:01 +0000 (0:00:01.076) 0:07:58.060 **** 2025-09-27 21:19:24.871947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:19:24.871958 | orchestrator | 2025-09-27 21:19:24.871966 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-27 21:19:24.871974 | orchestrator | Saturday 27 September 2025 21:19:02 +0000 (0:00:00.910) 0:07:58.970 **** 2025-09-27 21:19:24.871982 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.871990 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.871999 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872006 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872014 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872021 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872029 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872037 | orchestrator | 2025-09-27 21:19:24.872045 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-27 21:19:24.872054 | orchestrator | Saturday 27 September 2025 21:19:11 +0000 (0:00:08.984) 0:08:07.955 **** 2025-09-27 21:19:24.872062 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872071 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.872092 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872101 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872109 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872117 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872125 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872133 | orchestrator | 2025-09-27 21:19:24.872142 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-27 21:19:24.872151 | orchestrator | Saturday 27 September 2025 21:19:12 +0000 (0:00:00.844) 0:08:08.799 **** 2025-09-27 21:19:24.872159 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872167 | orchestrator | changed: [testbed-node-3] 
2025-09-27 21:19:24.872175 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872183 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872191 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872199 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872206 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872214 | orchestrator | 2025-09-27 21:19:24.872223 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-27 21:19:24.872232 | orchestrator | Saturday 27 September 2025 21:19:13 +0000 (0:00:01.632) 0:08:10.432 **** 2025-09-27 21:19:24.872240 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872248 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.872257 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872265 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872274 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872282 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872290 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872299 | orchestrator | 2025-09-27 21:19:24.872307 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-27 21:19:24.872333 | orchestrator | Saturday 27 September 2025 21:19:15 +0000 (0:00:01.904) 0:08:12.336 **** 2025-09-27 21:19:24.872342 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872350 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.872358 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872367 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872397 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872406 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872414 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872422 | orchestrator | 2025-09-27 21:19:24.872430 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-27 21:19:24.872437 | orchestrator | Saturday 27 September 2025 21:19:16 +0000 (0:00:01.246) 0:08:13.583 **** 2025-09-27 21:19:24.872444 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872452 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.872459 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872466 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872473 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872480 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872487 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872495 | orchestrator | 2025-09-27 21:19:24.872503 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-27 21:19:24.872510 | orchestrator | 2025-09-27 21:19:24.872518 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-27 21:19:24.872525 | orchestrator | Saturday 27 September 2025 21:19:18 +0000 (0:00:01.466) 0:08:15.049 **** 2025-09-27 21:19:24.872532 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:19:24.872539 | orchestrator | 2025-09-27 21:19:24.872546 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-27 21:19:24.872553 | orchestrator | Saturday 27 
September 2025 21:19:19 +0000 (0:00:00.848) 0:08:15.897 **** 2025-09-27 21:19:24.872559 | orchestrator | ok: [testbed-manager] 2025-09-27 21:19:24.872567 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:19:24.872584 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:19:24.872591 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:19:24.872598 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:19:24.872604 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:19:24.872611 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:19:24.872618 | orchestrator | 2025-09-27 21:19:24.872625 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-27 21:19:24.872633 | orchestrator | Saturday 27 September 2025 21:19:20 +0000 (0:00:00.878) 0:08:16.775 **** 2025-09-27 21:19:24.872667 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872675 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872682 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.872689 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872696 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872704 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872710 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872717 | orchestrator | 2025-09-27 21:19:24.872726 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-27 21:19:24.872734 | orchestrator | Saturday 27 September 2025 21:19:21 +0000 (0:00:01.454) 0:08:18.230 **** 2025-09-27 21:19:24.872742 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:19:24.872749 | orchestrator | 2025-09-27 21:19:24.872756 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-27 21:19:24.872763 | orchestrator | Saturday 27 September 2025 21:19:22 +0000 (0:00:00.936) 0:08:19.166 **** 2025-09-27 21:19:24.872770 | orchestrator | ok: [testbed-manager] 2025-09-27 21:19:24.872777 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:19:24.872784 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:19:24.872790 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:19:24.872798 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:19:24.872805 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:19:24.872812 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:19:24.872819 | orchestrator | 2025-09-27 21:19:24.872826 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-27 21:19:24.872834 | orchestrator | Saturday 27 September 2025 21:19:23 +0000 (0:00:00.860) 0:08:20.027 **** 2025-09-27 21:19:24.872840 | orchestrator | changed: [testbed-manager] 2025-09-27 21:19:24.872848 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:19:24.872854 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:19:24.872861 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:19:24.872869 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:19:24.872876 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:19:24.872883 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:19:24.872890 | orchestrator | 2025-09-27 21:19:24.872898 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:19:24.872906 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 
failed=0 skipped=42  rescued=0 ignored=0 2025-09-27 21:19:24.872914 | orchestrator | testbed-node-0 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-27 21:19:24.872922 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-27 21:19:24.872929 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-27 21:19:24.872936 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2025-09-27 21:19:24.872943 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-27 21:19:24.872963 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-27 21:19:24.872970 | orchestrator | 2025-09-27 21:19:24.872978 | orchestrator | 2025-09-27 21:19:24.872995 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:19:25.413203 | orchestrator | Saturday 27 September 2025 21:19:24 +0000 (0:00:01.416) 0:08:21.444 **** 2025-09-27 21:19:25.413373 | orchestrator | =============================================================================== 2025-09-27 21:19:25.413385 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.39s 2025-09-27 21:19:25.413395 | orchestrator | osism.commons.packages : Download required packages -------------------- 48.42s 2025-09-27 21:19:25.413404 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.09s 2025-09-27 21:19:25.413413 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.08s 2025-09-27 21:19:25.413423 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.03s 2025-09-27 21:19:25.413433 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.61s 2025-09-27 21:19:25.413442 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.39s 2025-09-27 21:19:25.413453 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.31s 2025-09-27 21:19:25.413465 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.71s 2025-09-27 21:19:25.413475 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.54s 2025-09-27 21:19:25.413483 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.99s 2025-09-27 21:19:25.413489 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.98s 2025-09-27 21:19:25.413495 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.77s 2025-09-27 21:19:25.413501 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.45s 2025-09-27 21:19:25.413507 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.41s 2025-09-27 21:19:25.413513 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.09s 2025-09-27 21:19:25.413519 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 8.04s 2025-09-27 21:19:25.413524 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.18s 
2025-09-27 21:19:25.413530 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.24s 2025-09-27 21:19:25.413536 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.93s 2025-09-27 21:19:25.771885 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-27 21:19:25.771993 | orchestrator | + osism apply network 2025-09-27 21:19:38.909156 | orchestrator | 2025-09-27 21:19:38 | INFO  | Task 76f6679d-cd35-4b47-923d-4a8ecead7ba1 (network) was prepared for execution. 2025-09-27 21:19:38.909279 | orchestrator | 2025-09-27 21:19:38 | INFO  | It takes a moment until task 76f6679d-cd35-4b47-923d-4a8ecead7ba1 (network) has been started and output is visible here. 2025-09-27 21:20:08.598951 | orchestrator | 2025-09-27 21:20:08.599053 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-27 21:20:08.599063 | orchestrator | 2025-09-27 21:20:08.599072 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-27 21:20:08.599080 | orchestrator | Saturday 27 September 2025 21:19:43 +0000 (0:00:00.250) 0:00:00.250 **** 2025-09-27 21:20:08.599087 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.599096 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.599103 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.599110 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.599117 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:08.599124 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.599131 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.599157 | orchestrator | 2025-09-27 21:20:08.599166 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-27 21:20:08.599173 | orchestrator | Saturday 27 September 2025 21:19:43 +0000 (0:00:00.686) 0:00:00.936 **** 2025-09-27 21:20:08.599182 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:20:08.599192 | orchestrator | 2025-09-27 21:20:08.599199 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-27 21:20:08.599206 | orchestrator | Saturday 27 September 2025 21:19:45 +0000 (0:00:01.283) 0:00:02.220 **** 2025-09-27 21:20:08.599213 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.599220 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.599227 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.599234 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.599241 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.599248 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:08.599255 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.599262 | orchestrator | 2025-09-27 21:20:08.599268 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-27 21:20:08.599275 | orchestrator | Saturday 27 September 2025 21:19:47 +0000 (0:00:02.141) 0:00:04.361 **** 2025-09-27 21:20:08.599282 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.599289 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.599296 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.599303 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.599310 | orchestrator | ok: 
[testbed-node-3] 2025-09-27 21:20:08.599317 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.599324 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.599331 | orchestrator | 2025-09-27 21:20:08.599350 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-27 21:20:08.599357 | orchestrator | Saturday 27 September 2025 21:19:49 +0000 (0:00:01.696) 0:00:06.058 **** 2025-09-27 21:20:08.599365 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-27 21:20:08.599373 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-27 21:20:08.599380 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-27 21:20:08.599387 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-27 21:20:08.599394 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-27 21:20:08.599401 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-27 21:20:08.599408 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-27 21:20:08.599415 | orchestrator | 2025-09-27 21:20:08.599422 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-27 21:20:08.599429 | orchestrator | Saturday 27 September 2025 21:19:50 +0000 (0:00:01.023) 0:00:07.081 **** 2025-09-27 21:20:08.599436 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-27 21:20:08.599444 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:20:08.599451 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:20:08.599458 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:20:08.599465 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:20:08.599472 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 21:20:08.599479 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:20:08.599485 | orchestrator | 2025-09-27 21:20:08.599492 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-27 21:20:08.599500 | orchestrator | Saturday 27 September 2025 21:19:53 +0000 (0:00:03.464) 0:00:10.546 **** 2025-09-27 21:20:08.599507 | orchestrator | changed: [testbed-manager] 2025-09-27 21:20:08.599515 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:20:08.599523 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:20:08.599531 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:20:08.599539 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:20:08.599553 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:20:08.599585 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:20:08.599595 | orchestrator | 2025-09-27 21:20:08.599603 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-27 21:20:08.599611 | orchestrator | Saturday 27 September 2025 21:19:55 +0000 (0:00:01.519) 0:00:12.065 **** 2025-09-27 21:20:08.599619 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:20:08.599628 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 21:20:08.599635 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-27 21:20:08.599643 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:20:08.599651 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:20:08.599659 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:20:08.599667 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:20:08.599675 | orchestrator | 2025-09-27 
21:20:08.599683 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-27 21:20:08.599691 | orchestrator | Saturday 27 September 2025 21:19:57 +0000 (0:00:02.167) 0:00:14.232 **** 2025-09-27 21:20:08.599699 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.599707 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.599715 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.599723 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.599731 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:08.599739 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.599746 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.599754 | orchestrator | 2025-09-27 21:20:08.599763 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-27 21:20:08.599784 | orchestrator | Saturday 27 September 2025 21:19:58 +0000 (0:00:01.108) 0:00:15.341 **** 2025-09-27 21:20:08.599793 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:20:08.599801 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:20:08.599809 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:20:08.599816 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:20:08.599824 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:20:08.599832 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:20:08.599840 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:20:08.599849 | orchestrator | 2025-09-27 21:20:08.599857 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-09-27 21:20:08.599865 | orchestrator | Saturday 27 September 2025 21:19:59 +0000 (0:00:00.664) 0:00:16.006 **** 2025-09-27 21:20:08.599872 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.599879 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.599886 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.599893 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.599900 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:08.599907 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.599914 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.599921 | orchestrator | 2025-09-27 21:20:08.599928 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-27 21:20:08.599935 | orchestrator | Saturday 27 September 2025 21:20:01 +0000 (0:00:02.462) 0:00:18.468 **** 2025-09-27 21:20:08.599942 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:20:08.599949 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:20:08.599956 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:20:08.599963 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:20:08.599970 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:20:08.599976 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:20:08.599984 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-27 21:20:08.599993 | orchestrator | 2025-09-27 21:20:08.600000 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-27 21:20:08.600007 | orchestrator | Saturday 27 September 2025 21:20:02 +0000 (0:00:00.935) 0:00:19.404 **** 2025-09-27 21:20:08.600014 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.600026 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:20:08.600033 | orchestrator 
| changed: [testbed-node-0] 2025-09-27 21:20:08.600040 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:20:08.600046 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:20:08.600053 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:20:08.600060 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:20:08.600067 | orchestrator | 2025-09-27 21:20:08.600074 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-27 21:20:08.600085 | orchestrator | Saturday 27 September 2025 21:20:04 +0000 (0:00:01.696) 0:00:21.101 **** 2025-09-27 21:20:08.600093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:20:08.600102 | orchestrator | 2025-09-27 21:20:08.600109 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-27 21:20:08.600116 | orchestrator | Saturday 27 September 2025 21:20:05 +0000 (0:00:01.358) 0:00:22.460 **** 2025-09-27 21:20:08.600123 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.600130 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.600137 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.600144 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.600151 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:08.600158 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.600165 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.600172 | orchestrator | 2025-09-27 21:20:08.600179 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-27 21:20:08.600186 | orchestrator | Saturday 27 September 2025 21:20:06 +0000 (0:00:01.008) 0:00:23.468 **** 2025-09-27 21:20:08.600193 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:08.600200 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:08.600207 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:08.600214 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:08.600221 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:08.600228 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:08.600234 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:08.600241 | orchestrator | 2025-09-27 21:20:08.600248 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-27 21:20:08.600255 | orchestrator | Saturday 27 September 2025 21:20:07 +0000 (0:00:00.872) 0:00:24.341 **** 2025-09-27 21:20:08.600262 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600269 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600276 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600283 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600290 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600297 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600304 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600311 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600318 | 
orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600325 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-27 21:20:08.600332 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600339 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600346 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600353 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-27 21:20:08.600360 | orchestrator | 2025-09-27 21:20:08.600371 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-09-27 21:20:26.133051 | orchestrator | Saturday 27 September 2025 21:20:08 +0000 (0:00:01.240) 0:00:25.582 **** 2025-09-27 21:20:26.133214 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:20:26.133234 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:20:26.133246 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:20:26.133257 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:20:26.133268 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:20:26.133279 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:20:26.133289 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:20:26.133306 | orchestrator | 2025-09-27 21:20:26.133331 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-09-27 21:20:26.133356 | orchestrator | Saturday 27 September 2025 21:20:09 +0000 (0:00:00.666) 0:00:26.249 **** 2025-09-27 21:20:26.133376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-3, testbed-node-0, testbed-node-2, testbed-node-5, testbed-node-4 2025-09-27 21:20:26.133398 | orchestrator | 2025-09-27 21:20:26.133418 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-09-27 21:20:26.133436 | orchestrator | Saturday 27 September 2025 21:20:13 +0000 (0:00:04.667) 0:00:30.916 **** 2025-09-27 21:20:26.133457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133471 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133521 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133572 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133777 | orchestrator | 2025-09-27 21:20:26.133787 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-27 21:20:26.133798 | orchestrator | Saturday 27 September 2025 21:20:19 +0000 (0:00:05.981) 0:00:36.898 **** 2025-09-27 21:20:26.133809 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 
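The netdev task above writes one 30-<name>.netdev/.network pair per VXLAN, with the VNI, MTU, local IP and peer list taken straight from the logged item data. As a quick sanity check, not part of the job itself, the resulting devices can be inspected on a node (commands assume a shell on testbed-node-0; only standard iproute2/systemd tooling):

    ip -d link show vxlan0        # expect: vxlan id 42, local 192.168.16.10, mtu 1350
    ip addr show vxlan1           # expect: the 192.168.128.10/20 address from the matching .network file
    bridge fdb show dev vxlan1    # should list the peers from the 'dests' list, assuming the role programs unicast FDB entries
    networkctl status vxlan0      # systemd-networkd's view of the 30-vxlan0.netdev/.network pair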
2025-09-27 21:20:26.133820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133881 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-27 21:20:26.133911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:26.133963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 
'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:32.761061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-27 21:20:32.761176 | orchestrator | 2025-09-27 21:20:32.761193 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-27 21:20:32.761207 | orchestrator | Saturday 27 September 2025 21:20:26 +0000 (0:00:06.210) 0:00:43.108 **** 2025-09-27 21:20:32.761220 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:20:32.761232 | orchestrator | 2025-09-27 21:20:32.761243 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-27 21:20:32.761254 | orchestrator | Saturday 27 September 2025 21:20:27 +0000 (0:00:01.319) 0:00:44.427 **** 2025-09-27 21:20:32.761316 | orchestrator | ok: [testbed-manager] 2025-09-27 21:20:32.761343 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:20:32.761364 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:20:32.761383 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:20:32.761402 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:20:32.761417 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:20:32.761428 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:20:32.761438 | orchestrator | 2025-09-27 21:20:32.761450 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-27 21:20:32.761461 | orchestrator | Saturday 27 September 2025 21:20:28 +0000 (0:00:01.238) 0:00:45.666 **** 2025-09-27 21:20:32.761473 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.761485 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.761495 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.761584 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:20:32.761600 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:20:32.761613 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.761624 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.761636 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.761673 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:20:32.761686 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:20:32.761698 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.761710 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.761722 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.761734 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  
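The networkd cleanup pass above only lists and prunes files under /etc/systemd/network, so the freshly written VXLAN units stay in place. To see what the role now manages and have networkd re-read it without a full restart, a minimal manual sketch (assumes a root shell on one of the nodes; file names as listed in the task above):

    ls -l /etc/systemd/network/   # 30-vxlan0.netdev, 30-vxlan0.network, 30-vxlan1.netdev, 30-vxlan1.network
    networkctl reload             # re-reads .netdev/.network files; available with the systemd shipped in Ubuntu 24.04
    networkctl list               # vxlan0 and vxlan1 should now appear in the interface list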
2025-09-27 21:20:32.761746 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:20:32.761758 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.761770 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.761790 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.761816 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:20:32.761842 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:20:32.761860 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.761879 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.761896 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.761915 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:20:32.761933 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:20:32.761952 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.761971 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.761990 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.762009 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:20:32.762078 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:20:32.762097 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-27 21:20:32.762116 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-27 21:20:32.762175 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-27 21:20:32.762196 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-27 21:20:32.762213 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:20:32.762224 | orchestrator | 2025-09-27 21:20:32.762235 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-27 21:20:32.762271 | orchestrator | Saturday 27 September 2025 21:20:30 +0000 (0:00:02.184) 0:00:47.851 **** 2025-09-27 21:20:32.762298 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:20:32.762321 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:20:32.762338 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:20:32.762357 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:20:32.762375 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:20:32.762394 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:20:32.762413 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:20:32.762431 | orchestrator | 2025-09-27 21:20:32.762442 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-27 21:20:32.762453 | orchestrator | Saturday 27 September 2025 21:20:31 +0000 (0:00:00.711) 0:00:48.562 **** 2025-09-27 21:20:32.762463 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:20:32.762474 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:20:32.762484 | orchestrator | skipping: 
[testbed-node-1] 2025-09-27 21:20:32.762509 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:20:32.762520 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:20:32.762560 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:20:32.762571 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:20:32.762582 | orchestrator | 2025-09-27 21:20:32.762593 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:20:32.762605 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:20:32.762617 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:20:32.762628 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:20:32.762639 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:20:32.762658 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:20:32.762669 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:20:32.762680 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:20:32.762690 | orchestrator | 2025-09-27 21:20:32.762702 | orchestrator | 2025-09-27 21:20:32.762713 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:20:32.762723 | orchestrator | Saturday 27 September 2025 21:20:32 +0000 (0:00:00.763) 0:00:49.326 **** 2025-09-27 21:20:32.762734 | orchestrator | =============================================================================== 2025-09-27 21:20:32.762745 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.21s 2025-09-27 21:20:32.762755 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.98s 2025-09-27 21:20:32.762766 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.67s 2025-09-27 21:20:32.762776 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.46s 2025-09-27 21:20:32.762787 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.46s 2025-09-27 21:20:32.762797 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.18s 2025-09-27 21:20:32.762808 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.17s 2025-09-27 21:20:32.762818 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.14s 2025-09-27 21:20:32.762829 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.70s 2025-09-27 21:20:32.762839 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.70s 2025-09-27 21:20:32.762850 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.52s 2025-09-27 21:20:32.762860 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2025-09-27 21:20:32.762871 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.32s 2025-09-27 21:20:32.762881 | orchestrator | osism.commons.network : Include type 
specific tasks --------------------- 1.28s 2025-09-27 21:20:32.762892 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s 2025-09-27 21:20:32.762902 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s 2025-09-27 21:20:32.762913 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-09-27 21:20:32.762923 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2025-09-27 21:20:32.762942 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s 2025-09-27 21:20:32.762952 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s 2025-09-27 21:20:33.089744 | orchestrator | + osism apply wireguard 2025-09-27 21:20:45.294545 | orchestrator | 2025-09-27 21:20:45 | INFO  | Task 2402c922-b325-407f-a093-f1bea9186709 (wireguard) was prepared for execution. 2025-09-27 21:20:45.294655 | orchestrator | 2025-09-27 21:20:45 | INFO  | It takes a moment until task 2402c922-b325-407f-a093-f1bea9186709 (wireguard) has been started and output is visible here. 2025-09-27 21:21:06.101720 | orchestrator | 2025-09-27 21:21:06.101858 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-27 21:21:06.101876 | orchestrator | 2025-09-27 21:21:06.101888 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-27 21:21:06.101899 | orchestrator | Saturday 27 September 2025 21:20:49 +0000 (0:00:00.255) 0:00:00.255 **** 2025-09-27 21:21:06.101911 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:06.101922 | orchestrator | 2025-09-27 21:21:06.101933 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-27 21:21:06.101944 | orchestrator | Saturday 27 September 2025 21:20:51 +0000 (0:00:01.747) 0:00:02.003 **** 2025-09-27 21:21:06.101955 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.101967 | orchestrator | 2025-09-27 21:21:06.101978 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-27 21:21:06.101988 | orchestrator | Saturday 27 September 2025 21:20:58 +0000 (0:00:07.167) 0:00:09.170 **** 2025-09-27 21:21:06.101999 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.102010 | orchestrator | 2025-09-27 21:21:06.102079 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-27 21:21:06.102091 | orchestrator | Saturday 27 September 2025 21:20:59 +0000 (0:00:00.569) 0:00:09.739 **** 2025-09-27 21:21:06.102102 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.102113 | orchestrator | 2025-09-27 21:21:06.102124 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-27 21:21:06.102135 | orchestrator | Saturday 27 September 2025 21:20:59 +0000 (0:00:00.436) 0:00:10.176 **** 2025-09-27 21:21:06.102146 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:06.102156 | orchestrator | 2025-09-27 21:21:06.102167 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-27 21:21:06.102178 | orchestrator | Saturday 27 September 2025 21:21:00 +0000 (0:00:00.508) 0:00:10.685 **** 2025-09-27 21:21:06.102189 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:06.102200 | orchestrator | 
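The key-handling tasks in this wireguard play boil down to stock wireguard-tools calls on testbed-manager; roughly equivalent commands are shown below, with illustrative file paths that are not necessarily the ones the role uses:

    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub   # private key plus derived public key
    wg genpsk > /etc/wireguard/peer.psk                                                 # preshared key handed to the client config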
2025-09-27 21:21:06.102212 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-27 21:21:06.102222 | orchestrator | Saturday 27 September 2025 21:21:00 +0000 (0:00:00.545) 0:00:11.230 **** 2025-09-27 21:21:06.102233 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:06.102244 | orchestrator | 2025-09-27 21:21:06.102275 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-27 21:21:06.102289 | orchestrator | Saturday 27 September 2025 21:21:01 +0000 (0:00:00.447) 0:00:11.678 **** 2025-09-27 21:21:06.102301 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.102313 | orchestrator | 2025-09-27 21:21:06.102326 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-27 21:21:06.102339 | orchestrator | Saturday 27 September 2025 21:21:02 +0000 (0:00:01.183) 0:00:12.862 **** 2025-09-27 21:21:06.102353 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-27 21:21:06.102366 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.102378 | orchestrator | 2025-09-27 21:21:06.102391 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-27 21:21:06.102403 | orchestrator | Saturday 27 September 2025 21:21:03 +0000 (0:00:00.930) 0:00:13.792 **** 2025-09-27 21:21:06.102416 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.102428 | orchestrator | 2025-09-27 21:21:06.102441 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-27 21:21:06.102500 | orchestrator | Saturday 27 September 2025 21:21:04 +0000 (0:00:01.691) 0:00:15.484 **** 2025-09-27 21:21:06.102513 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:06.102526 | orchestrator | 2025-09-27 21:21:06.102539 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:21:06.102552 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:21:06.102565 | orchestrator | 2025-09-27 21:21:06.102577 | orchestrator | 2025-09-27 21:21:06.102589 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:21:06.102603 | orchestrator | Saturday 27 September 2025 21:21:05 +0000 (0:00:00.963) 0:00:16.447 **** 2025-09-27 21:21:06.102615 | orchestrator | =============================================================================== 2025-09-27 21:21:06.102627 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.17s 2025-09-27 21:21:06.102638 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.75s 2025-09-27 21:21:06.102649 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-09-27 21:21:06.102660 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.18s 2025-09-27 21:21:06.102670 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-27 21:21:06.102681 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s 2025-09-27 21:21:06.102692 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-09-27 21:21:06.102702 | orchestrator | osism.services.wireguard : Get public key - server 
---------------------- 0.55s 2025-09-27 21:21:06.102713 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-09-27 21:21:06.102724 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2025-09-27 21:21:06.102734 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2025-09-27 21:21:06.386933 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-27 21:21:06.430131 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-27 21:21:06.430206 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-27 21:21:06.505107 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 199 0 --:--:-- --:--:-- --:--:-- 202 2025-09-27 21:21:06.517306 | orchestrator | + osism apply --environment custom workarounds 2025-09-27 21:21:08.407224 | orchestrator | 2025-09-27 21:21:08 | INFO  | Trying to run play workarounds in environment custom 2025-09-27 21:21:18.539736 | orchestrator | 2025-09-27 21:21:18 | INFO  | Task 2d057493-a5d2-4369-a49c-f3214eec9db6 (workarounds) was prepared for execution. 2025-09-27 21:21:18.539853 | orchestrator | 2025-09-27 21:21:18 | INFO  | It takes a moment until task 2d057493-a5d2-4369-a49c-f3214eec9db6 (workarounds) has been started and output is visible here. 2025-09-27 21:21:43.785503 | orchestrator | 2025-09-27 21:21:43.785610 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:21:43.785626 | orchestrator | 2025-09-27 21:21:43.785637 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-27 21:21:43.785648 | orchestrator | Saturday 27 September 2025 21:21:22 +0000 (0:00:00.143) 0:00:00.143 **** 2025-09-27 21:21:43.785660 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785671 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785682 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785692 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785703 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785772 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785785 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-27 21:21:43.785796 | orchestrator | 2025-09-27 21:21:43.785807 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-27 21:21:43.785817 | orchestrator | 2025-09-27 21:21:43.785828 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-27 21:21:43.785839 | orchestrator | Saturday 27 September 2025 21:21:23 +0000 (0:00:00.754) 0:00:00.898 **** 2025-09-27 21:21:43.785850 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:43.785862 | orchestrator | 2025-09-27 21:21:43.785887 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-27 21:21:43.785898 | orchestrator | 2025-09-27 21:21:43.785909 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-27 21:21:43.785919 | orchestrator | Saturday 27 September 2025 
21:21:25 +0000 (0:00:02.241) 0:00:03.140 **** 2025-09-27 21:21:43.785930 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:21:43.785941 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:21:43.785951 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:21:43.785961 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:21:43.785972 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:21:43.785982 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:21:43.785992 | orchestrator | 2025-09-27 21:21:43.786003 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-27 21:21:43.786050 | orchestrator | 2025-09-27 21:21:43.786066 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-27 21:21:43.786078 | orchestrator | Saturday 27 September 2025 21:21:27 +0000 (0:00:01.877) 0:00:05.018 **** 2025-09-27 21:21:43.786091 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-27 21:21:43.786104 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-27 21:21:43.786116 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-27 21:21:43.786128 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-27 21:21:43.786140 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-27 21:21:43.786151 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-27 21:21:43.786163 | orchestrator | 2025-09-27 21:21:43.786175 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-09-27 21:21:43.786187 | orchestrator | Saturday 27 September 2025 21:21:28 +0000 (0:00:01.487) 0:00:06.505 **** 2025-09-27 21:21:43.786199 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:21:43.786211 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:21:43.786223 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:21:43.786235 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:21:43.786246 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:21:43.786259 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:21:43.786270 | orchestrator | 2025-09-27 21:21:43.786282 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-27 21:21:43.786294 | orchestrator | Saturday 27 September 2025 21:21:32 +0000 (0:00:03.797) 0:00:10.303 **** 2025-09-27 21:21:43.786306 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:21:43.786317 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:21:43.786329 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:21:43.786341 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:21:43.786354 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:21:43.786365 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:21:43.786376 | orchestrator | 2025-09-27 21:21:43.786387 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-27 21:21:43.786406 | orchestrator | 2025-09-27 21:21:43.786434 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-27 21:21:43.786445 | 
orchestrator | Saturday 27 September 2025 21:21:33 +0000 (0:00:00.657) 0:00:10.961 **** 2025-09-27 21:21:43.786456 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:43.786466 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:21:43.786477 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:21:43.786487 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:21:43.786498 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:21:43.786509 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:21:43.786519 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:21:43.786530 | orchestrator | 2025-09-27 21:21:43.786540 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-27 21:21:43.786551 | orchestrator | Saturday 27 September 2025 21:21:34 +0000 (0:00:01.689) 0:00:12.650 **** 2025-09-27 21:21:43.786561 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:43.786572 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:21:43.786582 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:21:43.786593 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:21:43.786604 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:21:43.786614 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:21:43.786642 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:21:43.786654 | orchestrator | 2025-09-27 21:21:43.786665 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-27 21:21:43.786676 | orchestrator | Saturday 27 September 2025 21:21:36 +0000 (0:00:01.677) 0:00:14.327 **** 2025-09-27 21:21:43.786686 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:21:43.786697 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:43.786707 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:21:43.786718 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:21:43.786729 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:21:43.786739 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:21:43.786750 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:21:43.786760 | orchestrator | 2025-09-27 21:21:43.786771 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-27 21:21:43.786782 | orchestrator | Saturday 27 September 2025 21:21:38 +0000 (0:00:01.489) 0:00:15.817 **** 2025-09-27 21:21:43.786792 | orchestrator | changed: [testbed-manager] 2025-09-27 21:21:43.786803 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:21:43.786813 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:21:43.786824 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:21:43.786834 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:21:43.786845 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:21:43.786856 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:21:43.786866 | orchestrator | 2025-09-27 21:21:43.786877 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-27 21:21:43.786887 | orchestrator | Saturday 27 September 2025 21:21:40 +0000 (0:00:01.922) 0:00:17.739 **** 2025-09-27 21:21:43.786898 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:21:43.786909 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:21:43.786919 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:21:43.786930 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:21:43.786940 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:21:43.786957 | 
orchestrator | skipping: [testbed-node-1] 2025-09-27 21:21:43.786976 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:21:43.786995 | orchestrator | 2025-09-27 21:21:43.787015 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-27 21:21:43.787034 | orchestrator | 2025-09-27 21:21:43.787053 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-27 21:21:43.787071 | orchestrator | Saturday 27 September 2025 21:21:40 +0000 (0:00:00.625) 0:00:18.364 **** 2025-09-27 21:21:43.787090 | orchestrator | ok: [testbed-manager] 2025-09-27 21:21:43.787110 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:21:43.787138 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:21:43.787152 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:21:43.787163 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:21:43.787174 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:21:43.787185 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:21:43.787195 | orchestrator | 2025-09-27 21:21:43.787206 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:21:43.787218 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:21:43.787230 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:21:43.787240 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:21:43.787251 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:21:43.787262 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:21:43.787272 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:21:43.787283 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:21:43.787293 | orchestrator | 2025-09-27 21:21:43.787304 | orchestrator | 2025-09-27 21:21:43.787314 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:21:43.787325 | orchestrator | Saturday 27 September 2025 21:21:43 +0000 (0:00:03.108) 0:00:21.473 **** 2025-09-27 21:21:43.787336 | orchestrator | =============================================================================== 2025-09-27 21:21:43.787346 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s 2025-09-27 21:21:43.787357 | orchestrator | Install python3-docker -------------------------------------------------- 3.11s 2025-09-27 21:21:43.787368 | orchestrator | Apply netplan configuration --------------------------------------------- 2.24s 2025-09-27 21:21:43.787378 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.92s 2025-09-27 21:21:43.787389 | orchestrator | Apply netplan configuration --------------------------------------------- 1.88s 2025-09-27 21:21:43.787400 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s 2025-09-27 21:21:43.787410 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.68s 2025-09-27 21:21:43.787456 | orchestrator | Reload systemd daemon 
--------------------------------------------------- 1.49s 2025-09-27 21:21:43.787467 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2025-09-27 21:21:43.787478 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s 2025-09-27 21:21:43.787499 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2025-09-27 21:21:43.787519 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-09-27 21:21:44.308156 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-27 21:21:56.234691 | orchestrator | 2025-09-27 21:21:56 | INFO  | Task 8b483de4-2937-4f8e-a42d-f3a4ba53dd35 (reboot) was prepared for execution. 2025-09-27 21:21:56.234801 | orchestrator | 2025-09-27 21:21:56 | INFO  | It takes a moment until task 8b483de4-2937-4f8e-a42d-f3a4ba53dd35 (reboot) has been started and output is visible here. 2025-09-27 21:22:05.453968 | orchestrator | 2025-09-27 21:22:05.454120 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-27 21:22:05.454188 | orchestrator | 2025-09-27 21:22:05.454201 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-27 21:22:05.454211 | orchestrator | Saturday 27 September 2025 21:21:59 +0000 (0:00:00.153) 0:00:00.153 **** 2025-09-27 21:22:05.454221 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:22:05.454231 | orchestrator | 2025-09-27 21:22:05.454242 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-27 21:22:05.454251 | orchestrator | Saturday 27 September 2025 21:21:59 +0000 (0:00:00.082) 0:00:00.236 **** 2025-09-27 21:22:05.454261 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:22:05.454270 | orchestrator | 2025-09-27 21:22:05.454280 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-27 21:22:05.454300 | orchestrator | Saturday 27 September 2025 21:22:00 +0000 (0:00:00.858) 0:00:01.094 **** 2025-09-27 21:22:05.454310 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:22:05.454320 | orchestrator | 2025-09-27 21:22:05.454329 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-27 21:22:05.454339 | orchestrator | 2025-09-27 21:22:05.454348 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-27 21:22:05.454358 | orchestrator | Saturday 27 September 2025 21:22:00 +0000 (0:00:00.106) 0:00:01.200 **** 2025-09-27 21:22:05.454368 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:22:05.454377 | orchestrator | 2025-09-27 21:22:05.454422 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-27 21:22:05.454433 | orchestrator | Saturday 27 September 2025 21:22:01 +0000 (0:00:00.086) 0:00:01.286 **** 2025-09-27 21:22:05.454442 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:22:05.454452 | orchestrator | 2025-09-27 21:22:05.454461 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-27 21:22:05.454471 | orchestrator | Saturday 27 September 2025 21:22:01 +0000 (0:00:00.654) 0:00:01.940 **** 2025-09-27 21:22:05.454480 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:22:05.454490 | orchestrator | 2025-09-27 21:22:05.454500 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-27 21:22:05.454509 | orchestrator | 2025-09-27 21:22:05.454520 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-27 21:22:05.454532 | orchestrator | Saturday 27 September 2025 21:22:01 +0000 (0:00:00.094) 0:00:02.035 **** 2025-09-27 21:22:05.454542 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:22:05.454554 | orchestrator | 2025-09-27 21:22:05.454565 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-27 21:22:05.454576 | orchestrator | Saturday 27 September 2025 21:22:01 +0000 (0:00:00.140) 0:00:02.176 **** 2025-09-27 21:22:05.454587 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:22:05.454598 | orchestrator | 2025-09-27 21:22:05.454609 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-27 21:22:05.454620 | orchestrator | Saturday 27 September 2025 21:22:02 +0000 (0:00:00.653) 0:00:02.829 **** 2025-09-27 21:22:05.454631 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:22:05.454642 | orchestrator | 2025-09-27 21:22:05.454652 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-27 21:22:05.454663 | orchestrator | 2025-09-27 21:22:05.454674 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-27 21:22:05.454685 | orchestrator | Saturday 27 September 2025 21:22:02 +0000 (0:00:00.110) 0:00:02.940 **** 2025-09-27 21:22:05.454696 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:22:05.454707 | orchestrator | 2025-09-27 21:22:05.454717 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-27 21:22:05.454728 | orchestrator | Saturday 27 September 2025 21:22:02 +0000 (0:00:00.087) 0:00:03.027 **** 2025-09-27 21:22:05.454739 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:22:05.454751 | orchestrator | 2025-09-27 21:22:05.454761 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-27 21:22:05.454772 | orchestrator | Saturday 27 September 2025 21:22:03 +0000 (0:00:00.655) 0:00:03.683 **** 2025-09-27 21:22:05.454791 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:22:05.454802 | orchestrator | 2025-09-27 21:22:05.454813 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-27 21:22:05.454824 | orchestrator | 2025-09-27 21:22:05.454835 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-27 21:22:05.454846 | orchestrator | Saturday 27 September 2025 21:22:03 +0000 (0:00:00.117) 0:00:03.801 **** 2025-09-27 21:22:05.454857 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:22:05.454867 | orchestrator | 2025-09-27 21:22:05.454878 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-27 21:22:05.454888 | orchestrator | Saturday 27 September 2025 21:22:03 +0000 (0:00:00.102) 0:00:03.904 **** 2025-09-27 21:22:05.454897 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:22:05.454907 | orchestrator | 2025-09-27 21:22:05.454916 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-27 21:22:05.454926 | orchestrator | Saturday 27 September 2025 21:22:04 +0000 
(0:00:00.666) 0:00:04.570 **** 2025-09-27 21:22:05.454935 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:22:05.454945 | orchestrator | 2025-09-27 21:22:05.454954 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-27 21:22:05.454964 | orchestrator | 2025-09-27 21:22:05.454973 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-27 21:22:05.454983 | orchestrator | Saturday 27 September 2025 21:22:04 +0000 (0:00:00.105) 0:00:04.676 **** 2025-09-27 21:22:05.454992 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:22:05.455001 | orchestrator | 2025-09-27 21:22:05.455011 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-27 21:22:05.455021 | orchestrator | Saturday 27 September 2025 21:22:04 +0000 (0:00:00.109) 0:00:04.785 **** 2025-09-27 21:22:05.455030 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:22:05.455040 | orchestrator | 2025-09-27 21:22:05.455049 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-27 21:22:05.455059 | orchestrator | Saturday 27 September 2025 21:22:05 +0000 (0:00:00.639) 0:00:05.424 **** 2025-09-27 21:22:05.455084 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:22:05.455095 | orchestrator | 2025-09-27 21:22:05.455105 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:22:05.455115 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:22:05.455126 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:22:05.455136 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:22:05.455150 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:22:05.455160 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:22:05.455169 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:22:05.455179 | orchestrator | 2025-09-27 21:22:05.455188 | orchestrator | 2025-09-27 21:22:05.455198 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:22:05.455208 | orchestrator | Saturday 27 September 2025 21:22:05 +0000 (0:00:00.034) 0:00:05.459 **** 2025-09-27 21:22:05.455217 | orchestrator | =============================================================================== 2025-09-27 21:22:05.455227 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.13s 2025-09-27 21:22:05.455236 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2025-09-27 21:22:05.455254 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-09-27 21:22:05.711015 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-27 21:22:17.658297 | orchestrator | 2025-09-27 21:22:17 | INFO  | Task d45a23b0-e0fd-4fda-9246-eebba6b51178 (wait-for-connection) was prepared for execution. 
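The reboot play deliberately fires the reboot and moves on ("do not wait for the reboot to complete"); the wait-for-connection run that follows is what actually gates the job on the nodes coming back. A manual spot check could look like this (the host alias is illustrative):

    ssh testbed-node-0 uptime -s                     # boot timestamp should now be after ~21:22 UTC on 2025-09-27
    ssh testbed-node-0 systemctl is-system-running   # 'running' (or 'degraded') once all units have settled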
2025-09-27 21:22:17.658468 | orchestrator | 2025-09-27 21:22:17 | INFO  | It takes a moment until task d45a23b0-e0fd-4fda-9246-eebba6b51178 (wait-for-connection) has been started and output is visible here. 2025-09-27 21:22:33.283497 | orchestrator | 2025-09-27 21:22:33.283613 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-27 21:22:33.283630 | orchestrator | 2025-09-27 21:22:33.283642 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-27 21:22:33.283654 | orchestrator | Saturday 27 September 2025 21:22:21 +0000 (0:00:00.195) 0:00:00.195 **** 2025-09-27 21:22:33.283665 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:22:33.283677 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:22:33.283688 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:22:33.283698 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:22:33.283708 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:22:33.283719 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:22:33.283729 | orchestrator | 2025-09-27 21:22:33.283740 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:22:33.283752 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:22:33.283764 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:22:33.283775 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:22:33.283786 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:22:33.283797 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:22:33.283807 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:22:33.283818 | orchestrator | 2025-09-27 21:22:33.283829 | orchestrator | 2025-09-27 21:22:33.283839 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:22:33.283850 | orchestrator | Saturday 27 September 2025 21:22:32 +0000 (0:00:11.483) 0:00:11.679 **** 2025-09-27 21:22:33.283860 | orchestrator | =============================================================================== 2025-09-27 21:22:33.283871 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s 2025-09-27 21:22:33.550969 | orchestrator | + osism apply hddtemp 2025-09-27 21:22:45.562927 | orchestrator | 2025-09-27 21:22:45 | INFO  | Task 79d77f27-07ec-48ce-bbfd-12311ad7d77c (hddtemp) was prepared for execution. 2025-09-27 21:22:45.563021 | orchestrator | 2025-09-27 21:22:45 | INFO  | It takes a moment until task 79d77f27-07ec-48ce-bbfd-12311ad7d77c (hddtemp) has been started and output is visible here. 
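The wait play is essentially Ansible's wait_for_connection module applied to the testbed-nodes group; roughly the same check as an ad-hoc call (the inventory path here is an assumption, not taken from the log):

    ansible testbed-nodes -i /opt/configuration/inventory/hosts.yml \
      -m ansible.builtin.wait_for_connection -a "delay=5 sleep=5 timeout=600"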
2025-09-27 21:23:12.618370 | orchestrator | 2025-09-27 21:23:12.618471 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-27 21:23:12.618489 | orchestrator | 2025-09-27 21:23:12.618503 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-27 21:23:12.618514 | orchestrator | Saturday 27 September 2025 21:22:49 +0000 (0:00:00.194) 0:00:00.194 **** 2025-09-27 21:23:12.618526 | orchestrator | ok: [testbed-manager] 2025-09-27 21:23:12.618539 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:23:12.618550 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:23:12.618582 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:23:12.618594 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:23:12.618604 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:23:12.618615 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:23:12.618626 | orchestrator | 2025-09-27 21:23:12.618637 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-27 21:23:12.618648 | orchestrator | Saturday 27 September 2025 21:22:49 +0000 (0:00:00.502) 0:00:00.697 **** 2025-09-27 21:23:12.618669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:23:12.618682 | orchestrator | 2025-09-27 21:23:12.618694 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-27 21:23:12.618705 | orchestrator | Saturday 27 September 2025 21:22:50 +0000 (0:00:00.860) 0:00:01.557 **** 2025-09-27 21:23:12.618715 | orchestrator | ok: [testbed-manager] 2025-09-27 21:23:12.618726 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:23:12.618737 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:23:12.618747 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:23:12.618758 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:23:12.618769 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:23:12.618780 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:23:12.618790 | orchestrator | 2025-09-27 21:23:12.618801 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-27 21:23:12.618812 | orchestrator | Saturday 27 September 2025 21:22:52 +0000 (0:00:02.089) 0:00:03.647 **** 2025-09-27 21:23:12.618823 | orchestrator | changed: [testbed-manager] 2025-09-27 21:23:12.618835 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:23:12.618846 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:23:12.618856 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:23:12.618867 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:23:12.618877 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:23:12.618888 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:23:12.618898 | orchestrator | 2025-09-27 21:23:12.618910 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-09-27 21:23:12.618921 | orchestrator | Saturday 27 September 2025 21:22:53 +0000 (0:00:00.965) 0:00:04.613 **** 2025-09-27 21:23:12.618932 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:23:12.618942 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:23:12.618953 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:23:12.618964 | orchestrator | ok: [testbed-node-2] 2025-09-27 
21:23:12.618974 | orchestrator | ok: [testbed-manager] 2025-09-27 21:23:12.618985 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:23:12.618996 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:23:12.619007 | orchestrator | 2025-09-27 21:23:12.619018 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-27 21:23:12.619029 | orchestrator | Saturday 27 September 2025 21:22:54 +0000 (0:00:01.165) 0:00:05.779 **** 2025-09-27 21:23:12.619039 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:23:12.619050 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:23:12.619061 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:23:12.619071 | orchestrator | changed: [testbed-manager] 2025-09-27 21:23:12.619082 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:23:12.619093 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:23:12.619103 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:23:12.619114 | orchestrator | 2025-09-27 21:23:12.619125 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-27 21:23:12.619136 | orchestrator | Saturday 27 September 2025 21:22:55 +0000 (0:00:00.655) 0:00:06.435 **** 2025-09-27 21:23:12.619146 | orchestrator | changed: [testbed-manager] 2025-09-27 21:23:12.619157 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:23:12.619168 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:23:12.619179 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:23:12.619199 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:23:12.619210 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:23:12.619220 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:23:12.619231 | orchestrator | 2025-09-27 21:23:12.619242 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-27 21:23:12.619253 | orchestrator | Saturday 27 September 2025 21:23:08 +0000 (0:00:13.374) 0:00:19.809 **** 2025-09-27 21:23:12.619264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:23:12.619275 | orchestrator | 2025-09-27 21:23:12.619307 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-27 21:23:12.619318 | orchestrator | Saturday 27 September 2025 21:23:10 +0000 (0:00:01.419) 0:00:21.229 **** 2025-09-27 21:23:12.619328 | orchestrator | changed: [testbed-manager] 2025-09-27 21:23:12.619339 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:23:12.619350 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:23:12.619361 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:23:12.619372 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:23:12.619382 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:23:12.619393 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:23:12.619404 | orchestrator | 2025-09-27 21:23:12.619415 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:23:12.619426 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:23:12.619455 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:23:12.619467 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:23:12.619478 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:23:12.619489 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:23:12.619499 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:23:12.619515 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:23:12.619526 | orchestrator | 2025-09-27 21:23:12.619537 | orchestrator | 2025-09-27 21:23:12.619548 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:23:12.619559 | orchestrator | Saturday 27 September 2025 21:23:12 +0000 (0:00:01.910) 0:00:23.139 **** 2025-09-27 21:23:12.619570 | orchestrator | =============================================================================== 2025-09-27 21:23:12.619580 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.37s 2025-09-27 21:23:12.619591 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.09s 2025-09-27 21:23:12.619602 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2025-09-27 21:23:12.619613 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-09-27 21:23:12.619623 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.17s 2025-09-27 21:23:12.619634 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.97s 2025-09-27 21:23:12.619644 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.86s 2025-09-27 21:23:12.619655 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.66s 2025-09-27 21:23:12.619673 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.50s 2025-09-27 21:23:12.889773 | orchestrator | ++ semver latest 7.1.1 2025-09-27 21:23:12.941648 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-27 21:23:12.941731 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:23:12.941746 | orchestrator | + sudo systemctl restart manager.service 2025-09-27 21:23:26.547907 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-27 21:23:26.548029 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-27 21:23:26.548045 | orchestrator | + local max_attempts=60 2025-09-27 21:23:26.548056 | orchestrator | + local name=ceph-ansible 2025-09-27 21:23:26.548067 | orchestrator | + local attempt_num=1 2025-09-27 21:23:26.548079 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:26.592340 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:26.592439 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:26.592454 | orchestrator | + sleep 5 2025-09-27 21:23:31.598859 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:31.620675 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:31.620729 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:31.620743 | orchestrator | + sleep 5 2025-09-27 
21:23:36.624149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:36.666122 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:36.666193 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:36.666207 | orchestrator | + sleep 5 2025-09-27 21:23:41.671536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:41.706272 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:41.706353 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:41.706367 | orchestrator | + sleep 5 2025-09-27 21:23:46.710453 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:46.748261 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:46.748351 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:46.748364 | orchestrator | + sleep 5 2025-09-27 21:23:51.753326 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:51.791739 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:51.791824 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:51.791837 | orchestrator | + sleep 5 2025-09-27 21:23:56.797167 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:23:56.835075 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:23:56.835133 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:23:56.835146 | orchestrator | + sleep 5 2025-09-27 21:24:01.837712 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:01.872723 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:01.872782 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:24:01.872795 | orchestrator | + sleep 5 2025-09-27 21:24:06.874892 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:06.950542 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:06.950620 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:24:06.950633 | orchestrator | + sleep 5 2025-09-27 21:24:11.953833 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:11.993898 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:11.993943 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:24:11.993957 | orchestrator | + sleep 5 2025-09-27 21:24:17.001720 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:17.036885 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:17.036960 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:24:17.036973 | orchestrator | + sleep 5 2025-09-27 21:24:22.042341 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:22.080461 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:22.080545 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-27 21:24:22.080558 | orchestrator | + sleep 5 2025-09-27 21:24:27.085500 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:27.125953 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:27.126105 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-27 21:24:27.126126 | orchestrator | + sleep 5 2025-09-27 21:24:32.131141 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-27 21:24:32.170753 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:32.170847 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-27 21:24:32.170863 | orchestrator | + local max_attempts=60 2025-09-27 21:24:32.170875 | orchestrator | + local name=kolla-ansible 2025-09-27 21:24:32.170887 | orchestrator | + local attempt_num=1 2025-09-27 21:24:32.170909 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-27 21:24:32.202839 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:32.202949 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-27 21:24:32.202968 | orchestrator | + local max_attempts=60 2025-09-27 21:24:32.202981 | orchestrator | + local name=osism-ansible 2025-09-27 21:24:32.202992 | orchestrator | + local attempt_num=1 2025-09-27 21:24:32.203004 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-27 21:24:32.234500 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-27 21:24:32.234551 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-27 21:24:32.234562 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-27 21:24:32.376063 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-27 21:24:32.513803 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-27 21:24:32.654815 | orchestrator | ARA in osism-ansible already disabled. 2025-09-27 21:24:32.800656 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-27 21:24:32.801422 | orchestrator | + osism apply gather-facts 2025-09-27 21:24:44.758204 | orchestrator | 2025-09-27 21:24:44 | INFO  | Task 462d5456-c47b-419e-bad8-a67238aae9aa (gather-facts) was prepared for execution. 2025-09-27 21:24:44.758306 | orchestrator | 2025-09-27 21:24:44 | INFO  | It takes a moment until task 462d5456-c47b-419e-bad8-a67238aae9aa (gather-facts) has been started and output is visible here. 
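The retry loop traced above can be reconstructed into a small helper; this is a sketch inferred from the trace (the max_attempts/attempt_num counters, the docker inspect health probe and the 5-second sleep), not the script's verbatim source:

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Probe the container's health status until Docker reports "healthy".
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}

wait_for_container_healthy 60 ceph-ansible
wait_for_container_healthy 60 kolla-ansible
wait_for_container_healthy 60 osism-ansible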
2025-09-27 21:24:58.596681 | orchestrator | 2025-09-27 21:24:58.596785 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:24:58.596801 | orchestrator | 2025-09-27 21:24:58.596812 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:24:58.596822 | orchestrator | Saturday 27 September 2025 21:24:48 +0000 (0:00:00.216) 0:00:00.216 **** 2025-09-27 21:24:58.596833 | orchestrator | ok: [testbed-manager] 2025-09-27 21:24:58.596843 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:24:58.596853 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:24:58.596863 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:24:58.596872 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:24:58.596881 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:24:58.596891 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:24:58.596900 | orchestrator | 2025-09-27 21:24:58.596910 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-27 21:24:58.596919 | orchestrator | 2025-09-27 21:24:58.596929 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-27 21:24:58.596939 | orchestrator | Saturday 27 September 2025 21:24:57 +0000 (0:00:09.017) 0:00:09.234 **** 2025-09-27 21:24:58.596948 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:24:58.596959 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:24:58.596969 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:24:58.596978 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:24:58.596988 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:24:58.596997 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:24:58.597006 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:24:58.597016 | orchestrator | 2025-09-27 21:24:58.597025 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:24:58.597035 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597046 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597088 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597098 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597138 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597148 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597158 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:24:58.597167 | orchestrator | 2025-09-27 21:24:58.597176 | orchestrator | 2025-09-27 21:24:58.597185 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:24:58.597195 | orchestrator | Saturday 27 September 2025 21:24:58 +0000 (0:00:00.532) 0:00:09.767 **** 2025-09-27 21:24:58.597204 | orchestrator | =============================================================================== 2025-09-27 21:24:58.597213 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.02s 2025-09-27 
21:24:58.597223 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-09-27 21:24:58.853099 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-27 21:24:58.863369 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-27 21:24:58.872853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-27 21:24:58.883944 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-27 21:24:58.902320 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-27 21:24:58.916448 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-27 21:24:58.932075 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-27 21:24:58.946978 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-27 21:24:58.962146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-27 21:24:58.979693 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-27 21:24:58.995238 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-27 21:24:59.012713 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-27 21:24:59.033955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-27 21:24:59.047502 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-27 21:24:59.060539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-27 21:24:59.069897 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-27 21:24:59.079357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-27 21:24:59.088852 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-27 21:24:59.098269 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-27 21:24:59.106602 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-27 21:24:59.115947 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-27 21:24:59.513183 | orchestrator | ok: Runtime: 0:23:15.673361 2025-09-27 21:24:59.617215 | 2025-09-27 21:24:59.617345 | TASK [Deploy services] 2025-09-27 21:25:00.149035 | orchestrator | skipping: Conditional result was False 2025-09-27 21:25:00.167265 | 2025-09-27 21:25:00.167425 | TASK [Deploy in a nutshell] 2025-09-27 21:25:00.856292 | orchestrator | + set -e 
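The ln -sf block above only publishes short command names for the deploy, upgrade and bootstrap scripts; once the links exist, individual stages can be invoked directly on the manager (usage sketch, names as created above):

# Each name resolves to the corresponding script under /opt/configuration/scripts/.
deploy-ceph-with-ansible      # -> deploy/100-ceph-with-ansible.sh
deploy-infrastructure         # -> deploy/200-infrastructure.sh
deploy-openstack              # -> deploy/300-openstack.sh
deploy-monitoring             # -> deploy/400-monitoring.sh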
2025-09-27 21:25:00.857726 | orchestrator | 2025-09-27 21:25:00.857742 | orchestrator | # PULL IMAGES 2025-09-27 21:25:00.857747 | orchestrator | 2025-09-27 21:25:00.857756 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-27 21:25:00.857765 | orchestrator | ++ export INTERACTIVE=false 2025-09-27 21:25:00.857771 | orchestrator | ++ INTERACTIVE=false 2025-09-27 21:25:00.857792 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-27 21:25:00.857802 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-27 21:25:00.857808 | orchestrator | + source /opt/manager-vars.sh 2025-09-27 21:25:00.857813 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-27 21:25:00.857821 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-27 21:25:00.857825 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-27 21:25:00.857832 | orchestrator | ++ CEPH_VERSION=reef 2025-09-27 21:25:00.857836 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-27 21:25:00.857843 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-27 21:25:00.857847 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-27 21:25:00.857854 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-27 21:25:00.857858 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-27 21:25:00.857863 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-27 21:25:00.857867 | orchestrator | ++ export ARA=false 2025-09-27 21:25:00.857871 | orchestrator | ++ ARA=false 2025-09-27 21:25:00.857875 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-27 21:25:00.857878 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-27 21:25:00.857882 | orchestrator | ++ export TEMPEST=false 2025-09-27 21:25:00.857886 | orchestrator | ++ TEMPEST=false 2025-09-27 21:25:00.857890 | orchestrator | ++ export IS_ZUUL=true 2025-09-27 21:25:00.857894 | orchestrator | ++ IS_ZUUL=true 2025-09-27 21:25:00.857897 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:25:00.857901 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.199 2025-09-27 21:25:00.857905 | orchestrator | ++ export EXTERNAL_API=false 2025-09-27 21:25:00.857909 | orchestrator | ++ EXTERNAL_API=false 2025-09-27 21:25:00.857913 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-27 21:25:00.857917 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-27 21:25:00.857920 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-27 21:25:00.857924 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-27 21:25:00.857929 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-27 21:25:00.857932 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-27 21:25:00.857936 | orchestrator | + echo 2025-09-27 21:25:00.857945 | orchestrator | + echo '# PULL IMAGES' 2025-09-27 21:25:00.857949 | orchestrator | + echo 2025-09-27 21:25:00.858034 | orchestrator | ++ semver latest 7.0.0 2025-09-27 21:25:00.916007 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-27 21:25:00.916050 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-27 21:25:00.916056 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-27 21:25:02.737519 | orchestrator | 2025-09-27 21:25:02 | INFO  | Trying to run play pull-images in environment custom 2025-09-27 21:25:12.848160 | orchestrator | 2025-09-27 21:25:12 | INFO  | Task 0d523edf-4ec4-42b0-b34c-63fdd360df38 (pull-images) was prepared for execution. 2025-09-27 21:25:12.848274 | orchestrator | 2025-09-27 21:25:12 | INFO  | Task 0d523edf-4ec4-42b0-b34c-63fdd360df38 is running in background. No more output. Check ARA for logs. 
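The pull-images call is gated on the manager version; reconstructed from the `semver latest 7.0.0`, `[[ -1 -ge 0 ]]` and `[[ latest == latest ]]` lines above, the gate looks roughly like this (a sketch of the apparent logic, assuming `semver A B` prints -1, 0 or 1 like a comparison):

if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
    # Pre-pull service images in the background: retry twice, custom environment, no wait for output.
    osism apply --no-wait -r 2 -e custom pull-images
fi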
2025-09-27 21:25:14.736461 | orchestrator | 2025-09-27 21:25:14 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-27 21:25:24.900646 | orchestrator | 2025-09-27 21:25:24 | INFO  | Task 41d795cb-2924-4fe2-87a5-b47a9765dc1d (wipe-partitions) was prepared for execution. 2025-09-27 21:25:24.900779 | orchestrator | 2025-09-27 21:25:24 | INFO  | It takes a moment until task 41d795cb-2924-4fe2-87a5-b47a9765dc1d (wipe-partitions) has been started and output is visible here. 2025-09-27 21:25:38.776102 | orchestrator | 2025-09-27 21:25:38.776225 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-27 21:25:38.776243 | orchestrator | 2025-09-27 21:25:38.776256 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-27 21:25:38.776282 | orchestrator | Saturday 27 September 2025 21:25:29 +0000 (0:00:00.131) 0:00:00.131 **** 2025-09-27 21:25:38.776293 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:25:38.776306 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:25:38.776317 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:25:38.776328 | orchestrator | 2025-09-27 21:25:38.776339 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-27 21:25:38.776383 | orchestrator | Saturday 27 September 2025 21:25:30 +0000 (0:00:00.566) 0:00:00.698 **** 2025-09-27 21:25:38.776395 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:25:38.776406 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:25:38.776421 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:25:38.776432 | orchestrator | 2025-09-27 21:25:38.776444 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-27 21:25:38.776455 | orchestrator | Saturday 27 September 2025 21:25:30 +0000 (0:00:00.229) 0:00:00.927 **** 2025-09-27 21:25:38.776496 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:25:38.776509 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:25:38.776520 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:25:38.776531 | orchestrator | 2025-09-27 21:25:38.776542 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-27 21:25:38.776553 | orchestrator | Saturday 27 September 2025 21:25:31 +0000 (0:00:00.744) 0:00:01.672 **** 2025-09-27 21:25:38.776564 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:25:38.776575 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:25:38.776586 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:25:38.776611 | orchestrator | 2025-09-27 21:25:38.776623 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-27 21:25:38.776644 | orchestrator | Saturday 27 September 2025 21:25:31 +0000 (0:00:00.248) 0:00:01.920 **** 2025-09-27 21:25:38.776656 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-27 21:25:38.776672 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-27 21:25:38.776683 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-27 21:25:38.776694 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-27 21:25:38.776704 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-27 21:25:38.776715 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-27 21:25:38.776725 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2025-09-27 21:25:38.776736 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-27 21:25:38.776747 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-27 21:25:38.776757 | orchestrator | 2025-09-27 21:25:38.776768 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-27 21:25:38.776780 | orchestrator | Saturday 27 September 2025 21:25:33 +0000 (0:00:02.205) 0:00:04.126 **** 2025-09-27 21:25:38.776791 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-27 21:25:38.776802 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-27 21:25:38.776812 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-27 21:25:38.776823 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-27 21:25:38.776834 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-27 21:25:38.776844 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-09-27 21:25:38.776855 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-27 21:25:38.776865 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-27 21:25:38.776876 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-27 21:25:38.776886 | orchestrator | 2025-09-27 21:25:38.776897 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-27 21:25:38.776908 | orchestrator | Saturday 27 September 2025 21:25:34 +0000 (0:00:01.364) 0:00:05.490 **** 2025-09-27 21:25:38.776919 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-27 21:25:38.776929 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-27 21:25:38.776940 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-27 21:25:38.776951 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-27 21:25:38.776961 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-27 21:25:38.776972 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-27 21:25:38.776982 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-27 21:25:38.777002 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-27 21:25:38.777023 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-27 21:25:38.777054 | orchestrator | 2025-09-27 21:25:38.777066 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-27 21:25:38.777077 | orchestrator | Saturday 27 September 2025 21:25:37 +0000 (0:00:02.345) 0:00:07.836 **** 2025-09-27 21:25:38.777087 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:25:38.777098 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:25:38.777109 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:25:38.777119 | orchestrator | 2025-09-27 21:25:38.777130 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-09-27 21:25:38.777141 | orchestrator | Saturday 27 September 2025 21:25:37 +0000 (0:00:00.591) 0:00:08.427 **** 2025-09-27 21:25:38.777151 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:25:38.777162 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:25:38.777173 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:25:38.777183 | orchestrator | 2025-09-27 21:25:38.777194 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:25:38.777207 | orchestrator | testbed-node-3 : ok=7  changed=5  
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:25:38.777220 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:25:38.777249 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:25:38.777261 | orchestrator | 2025-09-27 21:25:38.777272 | orchestrator | 2025-09-27 21:25:38.777283 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:25:38.777293 | orchestrator | Saturday 27 September 2025 21:25:38 +0000 (0:00:00.638) 0:00:09.066 **** 2025-09-27 21:25:38.777304 | orchestrator | =============================================================================== 2025-09-27 21:25:38.777315 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.35s 2025-09-27 21:25:38.777326 | orchestrator | Check device availability ----------------------------------------------- 2.21s 2025-09-27 21:25:38.777337 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s 2025-09-27 21:25:38.777347 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.74s 2025-09-27 21:25:38.777358 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-09-27 21:25:38.777369 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-09-27 21:25:38.777379 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-09-27 21:25:38.777390 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-09-27 21:25:38.777400 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-09-27 21:25:50.959375 | orchestrator | 2025-09-27 21:25:50 | INFO  | Task e109e8e0-f674-418c-80a6-02a5dfe7245a (facts) was prepared for execution. 2025-09-27 21:25:50.959477 | orchestrator | 2025-09-27 21:25:50 | INFO  | It takes a moment until task e109e8e0-f674-418c-80a6-02a5dfe7245a (facts) has been started and output is visible here. 
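Per data disk, the wipe-partitions play above boils down to a wipefs pass, zeroing the first 32 MiB and re-triggering udev. A destructive manual equivalent, shown only to illustrate the steps (device names as in the play output):

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    sudo wipefs --all "$dev"                                      # "Wipe partitions with wipefs"
    sudo dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct    # "Overwrite first 32M with zeros"
done
sudo udevadm control --reload-rules                               # "Reload udev rules"
sudo udevadm trigger                                              # "Request device events from the kernel"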
2025-09-27 21:26:03.776260 | orchestrator | 2025-09-27 21:26:03.776346 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-27 21:26:03.776354 | orchestrator | 2025-09-27 21:26:03.776359 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-27 21:26:03.776364 | orchestrator | Saturday 27 September 2025 21:25:54 +0000 (0:00:00.261) 0:00:00.261 **** 2025-09-27 21:26:03.776368 | orchestrator | ok: [testbed-manager] 2025-09-27 21:26:03.776374 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:26:03.776378 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:26:03.776401 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:26:03.776405 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:26:03.776409 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:26:03.776413 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:26:03.776417 | orchestrator | 2025-09-27 21:26:03.776421 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-27 21:26:03.776425 | orchestrator | Saturday 27 September 2025 21:25:55 +0000 (0:00:01.053) 0:00:01.315 **** 2025-09-27 21:26:03.776429 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:26:03.776434 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:26:03.776439 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:26:03.776442 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:26:03.776447 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:03.776450 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:03.776454 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:03.776458 | orchestrator | 2025-09-27 21:26:03.776462 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:26:03.776466 | orchestrator | 2025-09-27 21:26:03.776480 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:26:03.776484 | orchestrator | Saturday 27 September 2025 21:25:57 +0000 (0:00:01.173) 0:00:02.488 **** 2025-09-27 21:26:03.776488 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:26:03.776492 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:26:03.776496 | orchestrator | ok: [testbed-manager] 2025-09-27 21:26:03.776500 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:26:03.776504 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:26:03.776508 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:26:03.776512 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:26:03.776516 | orchestrator | 2025-09-27 21:26:03.776520 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-27 21:26:03.776524 | orchestrator | 2025-09-27 21:26:03.776528 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-27 21:26:03.776532 | orchestrator | Saturday 27 September 2025 21:26:02 +0000 (0:00:05.858) 0:00:08.347 **** 2025-09-27 21:26:03.776536 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:26:03.776540 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:26:03.776544 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:26:03.776562 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:26:03.776566 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:03.776570 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:03.776574 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:26:03.776580 | orchestrator | 2025-09-27 21:26:03.776587 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:26:03.776594 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776603 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776609 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776616 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776623 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776630 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776637 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:26:03.776643 | orchestrator | 2025-09-27 21:26:03.776652 | orchestrator | 2025-09-27 21:26:03.776656 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:26:03.776660 | orchestrator | Saturday 27 September 2025 21:26:03 +0000 (0:00:00.504) 0:00:08.852 **** 2025-09-27 21:26:03.776664 | orchestrator | =============================================================================== 2025-09-27 21:26:03.776668 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.86s 2025-09-27 21:26:03.776672 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2025-09-27 21:26:03.776676 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s 2025-09-27 21:26:03.776680 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-09-27 21:26:05.911223 | orchestrator | 2025-09-27 21:26:05 | INFO  | Task cde6f0c4-df8f-46c9-a66f-24e5b28bff32 (ceph-configure-lvm-volumes) was prepared for execution. 2025-09-27 21:26:05.911368 | orchestrator | 2025-09-27 21:26:05 | INFO  | It takes a moment until task cde6f0c4-df8f-46c9-a66f-24e5b28bff32 (ceph-configure-lvm-volumes) has been started and output is visible here. 
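The ceph-configure-lvm-volumes play announced above generates one UUID per OSD disk and derives the volume group and logical volume names from it, as the configuration printed further down shows (ceph-&lt;uuid&gt; / osd-block-&lt;uuid&gt;). A hedged sketch of what one such entry corresponds to on disk, using the /dev/sdb UUID from the output below; the LVM objects themselves are created by a later step, not by this play:

# Illustrative only: naming pattern taken from the printed lvm_volumes entry for /dev/sdb.
uuid=c2ef8475-4f12-50de-ab79-c841a7bfbe3d
sudo pvcreate /dev/sdb
sudo vgcreate "ceph-${uuid}" /dev/sdb
sudo lvcreate -l 100%FREE -n "osd-block-${uuid}" "ceph-${uuid}"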
2025-09-27 21:26:17.114546 | orchestrator | 2025-09-27 21:26:17.114600 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-27 21:26:17.114606 | orchestrator | 2025-09-27 21:26:17.114610 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:26:17.114616 | orchestrator | Saturday 27 September 2025 21:26:09 +0000 (0:00:00.306) 0:00:00.306 **** 2025-09-27 21:26:17.114621 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-27 21:26:17.114625 | orchestrator | 2025-09-27 21:26:17.114629 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:26:17.114633 | orchestrator | Saturday 27 September 2025 21:26:10 +0000 (0:00:00.228) 0:00:00.535 **** 2025-09-27 21:26:17.114637 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:26:17.114642 | orchestrator | 2025-09-27 21:26:17.114646 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114650 | orchestrator | Saturday 27 September 2025 21:26:10 +0000 (0:00:00.220) 0:00:00.755 **** 2025-09-27 21:26:17.114654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-27 21:26:17.114659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-27 21:26:17.114663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-27 21:26:17.114673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-27 21:26:17.114677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-27 21:26:17.114681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-27 21:26:17.114685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-27 21:26:17.114689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-27 21:26:17.114693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-27 21:26:17.114697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-27 21:26:17.114700 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-27 21:26:17.114704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-27 21:26:17.114708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-27 21:26:17.114711 | orchestrator | 2025-09-27 21:26:17.114715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114719 | orchestrator | Saturday 27 September 2025 21:26:10 +0000 (0:00:00.352) 0:00:01.108 **** 2025-09-27 21:26:17.114723 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114738 | orchestrator | 2025-09-27 21:26:17.114742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114745 | orchestrator | Saturday 27 September 2025 21:26:11 +0000 (0:00:00.433) 0:00:01.542 **** 2025-09-27 21:26:17.114749 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
21:26:17.114753 | orchestrator | 2025-09-27 21:26:17.114757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114760 | orchestrator | Saturday 27 September 2025 21:26:11 +0000 (0:00:00.176) 0:00:01.718 **** 2025-09-27 21:26:17.114764 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114768 | orchestrator | 2025-09-27 21:26:17.114771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114775 | orchestrator | Saturday 27 September 2025 21:26:11 +0000 (0:00:00.192) 0:00:01.910 **** 2025-09-27 21:26:17.114779 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114785 | orchestrator | 2025-09-27 21:26:17.114788 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114792 | orchestrator | Saturday 27 September 2025 21:26:11 +0000 (0:00:00.188) 0:00:02.099 **** 2025-09-27 21:26:17.114796 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114800 | orchestrator | 2025-09-27 21:26:17.114804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114807 | orchestrator | Saturday 27 September 2025 21:26:11 +0000 (0:00:00.188) 0:00:02.287 **** 2025-09-27 21:26:17.114811 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114815 | orchestrator | 2025-09-27 21:26:17.114818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114822 | orchestrator | Saturday 27 September 2025 21:26:12 +0000 (0:00:00.213) 0:00:02.501 **** 2025-09-27 21:26:17.114826 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114830 | orchestrator | 2025-09-27 21:26:17.114833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114837 | orchestrator | Saturday 27 September 2025 21:26:12 +0000 (0:00:00.191) 0:00:02.692 **** 2025-09-27 21:26:17.114841 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.114844 | orchestrator | 2025-09-27 21:26:17.114848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114852 | orchestrator | Saturday 27 September 2025 21:26:12 +0000 (0:00:00.191) 0:00:02.883 **** 2025-09-27 21:26:17.114856 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163) 2025-09-27 21:26:17.114861 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163) 2025-09-27 21:26:17.114865 | orchestrator | 2025-09-27 21:26:17.114868 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114872 | orchestrator | Saturday 27 September 2025 21:26:12 +0000 (0:00:00.388) 0:00:03.272 **** 2025-09-27 21:26:17.114882 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990) 2025-09-27 21:26:17.114886 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990) 2025-09-27 21:26:17.114889 | orchestrator | 2025-09-27 21:26:17.114893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114897 | orchestrator | Saturday 27 September 2025 21:26:13 +0000 (0:00:00.404) 0:00:03.677 **** 2025-09-27 
21:26:17.114904 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a) 2025-09-27 21:26:17.114907 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a) 2025-09-27 21:26:17.114911 | orchestrator | 2025-09-27 21:26:17.114915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114919 | orchestrator | Saturday 27 September 2025 21:26:13 +0000 (0:00:00.589) 0:00:04.266 **** 2025-09-27 21:26:17.114922 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c) 2025-09-27 21:26:17.114932 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c) 2025-09-27 21:26:17.114936 | orchestrator | 2025-09-27 21:26:17.114940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:17.114943 | orchestrator | Saturday 27 September 2025 21:26:14 +0000 (0:00:00.587) 0:00:04.854 **** 2025-09-27 21:26:17.114947 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:26:17.114951 | orchestrator | 2025-09-27 21:26:17.114955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.114959 | orchestrator | Saturday 27 September 2025 21:26:15 +0000 (0:00:00.656) 0:00:05.511 **** 2025-09-27 21:26:17.114963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-27 21:26:17.114966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-27 21:26:17.114970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-27 21:26:17.114974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-27 21:26:17.114977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-27 21:26:17.115036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-27 21:26:17.115041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-27 21:26:17.115045 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-27 21:26:17.115049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-27 21:26:17.115052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-27 21:26:17.115056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-27 21:26:17.115060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-27 21:26:17.115064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-27 21:26:17.115068 | orchestrator | 2025-09-27 21:26:17.115071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115075 | orchestrator | Saturday 27 September 2025 21:26:15 +0000 (0:00:00.394) 0:00:05.906 **** 2025-09-27 21:26:17.115079 | orchestrator | skipping: [testbed-node-3] 
2025-09-27 21:26:17.115083 | orchestrator | 2025-09-27 21:26:17.115087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115090 | orchestrator | Saturday 27 September 2025 21:26:15 +0000 (0:00:00.197) 0:00:06.103 **** 2025-09-27 21:26:17.115094 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.115098 | orchestrator | 2025-09-27 21:26:17.115102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115105 | orchestrator | Saturday 27 September 2025 21:26:15 +0000 (0:00:00.183) 0:00:06.286 **** 2025-09-27 21:26:17.115109 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.115113 | orchestrator | 2025-09-27 21:26:17.115117 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115120 | orchestrator | Saturday 27 September 2025 21:26:16 +0000 (0:00:00.194) 0:00:06.481 **** 2025-09-27 21:26:17.115124 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.115128 | orchestrator | 2025-09-27 21:26:17.115132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115135 | orchestrator | Saturday 27 September 2025 21:26:16 +0000 (0:00:00.190) 0:00:06.672 **** 2025-09-27 21:26:17.115139 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.115143 | orchestrator | 2025-09-27 21:26:17.115150 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115154 | orchestrator | Saturday 27 September 2025 21:26:16 +0000 (0:00:00.195) 0:00:06.867 **** 2025-09-27 21:26:17.115158 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.115162 | orchestrator | 2025-09-27 21:26:17.115165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115169 | orchestrator | Saturday 27 September 2025 21:26:16 +0000 (0:00:00.189) 0:00:07.057 **** 2025-09-27 21:26:17.115173 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:17.115177 | orchestrator | 2025-09-27 21:26:17.115180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:17.115184 | orchestrator | Saturday 27 September 2025 21:26:16 +0000 (0:00:00.195) 0:00:07.252 **** 2025-09-27 21:26:17.115191 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387269 | orchestrator | 2025-09-27 21:26:24.387342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:24.387351 | orchestrator | Saturday 27 September 2025 21:26:17 +0000 (0:00:00.197) 0:00:07.450 **** 2025-09-27 21:26:24.387358 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-27 21:26:24.387366 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-27 21:26:24.387372 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-27 21:26:24.387378 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-27 21:26:24.387384 | orchestrator | 2025-09-27 21:26:24.387390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:24.387396 | orchestrator | Saturday 27 September 2025 21:26:18 +0000 (0:00:00.999) 0:00:08.449 **** 2025-09-27 21:26:24.387412 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387419 | orchestrator | 2025-09-27 21:26:24.387425 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:24.387431 | orchestrator | Saturday 27 September 2025 21:26:18 +0000 (0:00:00.199) 0:00:08.649 **** 2025-09-27 21:26:24.387436 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387442 | orchestrator | 2025-09-27 21:26:24.387448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:24.387454 | orchestrator | Saturday 27 September 2025 21:26:18 +0000 (0:00:00.229) 0:00:08.878 **** 2025-09-27 21:26:24.387459 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387465 | orchestrator | 2025-09-27 21:26:24.387471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:24.387477 | orchestrator | Saturday 27 September 2025 21:26:18 +0000 (0:00:00.201) 0:00:09.080 **** 2025-09-27 21:26:24.387483 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387488 | orchestrator | 2025-09-27 21:26:24.387494 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-27 21:26:24.387500 | orchestrator | Saturday 27 September 2025 21:26:18 +0000 (0:00:00.193) 0:00:09.274 **** 2025-09-27 21:26:24.387506 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-27 21:26:24.387512 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-27 21:26:24.387518 | orchestrator | 2025-09-27 21:26:24.387523 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-27 21:26:24.387529 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.172) 0:00:09.446 **** 2025-09-27 21:26:24.387535 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387541 | orchestrator | 2025-09-27 21:26:24.387546 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-27 21:26:24.387552 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.138) 0:00:09.585 **** 2025-09-27 21:26:24.387558 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387564 | orchestrator | 2025-09-27 21:26:24.387570 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-27 21:26:24.387575 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.137) 0:00:09.723 **** 2025-09-27 21:26:24.387581 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387602 | orchestrator | 2025-09-27 21:26:24.387609 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-27 21:26:24.387615 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.152) 0:00:09.875 **** 2025-09-27 21:26:24.387620 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:26:24.387627 | orchestrator | 2025-09-27 21:26:24.387632 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-27 21:26:24.387638 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.144) 0:00:10.020 **** 2025-09-27 21:26:24.387645 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}}) 2025-09-27 21:26:24.387651 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}}) 2025-09-27 21:26:24.387657 | orchestrator | 
2025-09-27 21:26:24.387663 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-27 21:26:24.387669 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.159) 0:00:10.179 **** 2025-09-27 21:26:24.387675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}})  2025-09-27 21:26:24.387687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}})  2025-09-27 21:26:24.387693 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387699 | orchestrator | 2025-09-27 21:26:24.387705 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-27 21:26:24.387711 | orchestrator | Saturday 27 September 2025 21:26:19 +0000 (0:00:00.155) 0:00:10.335 **** 2025-09-27 21:26:24.387717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}})  2025-09-27 21:26:24.387722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}})  2025-09-27 21:26:24.387728 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387734 | orchestrator | 2025-09-27 21:26:24.387740 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-27 21:26:24.387746 | orchestrator | Saturday 27 September 2025 21:26:20 +0000 (0:00:00.409) 0:00:10.744 **** 2025-09-27 21:26:24.387751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}})  2025-09-27 21:26:24.387757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}})  2025-09-27 21:26:24.387763 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387769 | orchestrator | 2025-09-27 21:26:24.387784 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-27 21:26:24.387791 | orchestrator | Saturday 27 September 2025 21:26:20 +0000 (0:00:00.152) 0:00:10.897 **** 2025-09-27 21:26:24.387796 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:26:24.387802 | orchestrator | 2025-09-27 21:26:24.387808 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-27 21:26:24.387814 | orchestrator | Saturday 27 September 2025 21:26:20 +0000 (0:00:00.141) 0:00:11.039 **** 2025-09-27 21:26:24.387820 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:26:24.387826 | orchestrator | 2025-09-27 21:26:24.387832 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-27 21:26:24.387838 | orchestrator | Saturday 27 September 2025 21:26:20 +0000 (0:00:00.146) 0:00:11.186 **** 2025-09-27 21:26:24.387844 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387849 | orchestrator | 2025-09-27 21:26:24.387855 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-27 21:26:24.387861 | orchestrator | Saturday 27 September 2025 21:26:20 +0000 (0:00:00.140) 0:00:11.326 **** 2025-09-27 21:26:24.387867 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387873 | orchestrator | 2025-09-27 21:26:24.387883 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-09-27 21:26:24.387889 | orchestrator | Saturday 27 September 2025 21:26:21 +0000 (0:00:00.125) 0:00:11.451 **** 2025-09-27 21:26:24.387895 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.387901 | orchestrator | 2025-09-27 21:26:24.387907 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-27 21:26:24.387913 | orchestrator | Saturday 27 September 2025 21:26:21 +0000 (0:00:00.123) 0:00:11.574 **** 2025-09-27 21:26:24.387919 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:26:24.387924 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:26:24.387930 | orchestrator |  "sdb": { 2025-09-27 21:26:24.387936 | orchestrator |  "osd_lvm_uuid": "c2ef8475-4f12-50de-ab79-c841a7bfbe3d" 2025-09-27 21:26:24.387942 | orchestrator |  }, 2025-09-27 21:26:24.387948 | orchestrator |  "sdc": { 2025-09-27 21:26:24.387954 | orchestrator |  "osd_lvm_uuid": "e5968580-5dd1-5a87-a5e5-bc9ba69f72d9" 2025-09-27 21:26:24.387960 | orchestrator |  } 2025-09-27 21:26:24.387965 | orchestrator |  } 2025-09-27 21:26:24.387985 | orchestrator | } 2025-09-27 21:26:24.387991 | orchestrator | 2025-09-27 21:26:24.387997 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-27 21:26:24.388003 | orchestrator | Saturday 27 September 2025 21:26:21 +0000 (0:00:00.138) 0:00:11.713 **** 2025-09-27 21:26:24.388009 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.388015 | orchestrator | 2025-09-27 21:26:24.388021 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-27 21:26:24.388026 | orchestrator | Saturday 27 September 2025 21:26:21 +0000 (0:00:00.141) 0:00:11.854 **** 2025-09-27 21:26:24.388036 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.388042 | orchestrator | 2025-09-27 21:26:24.388048 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-27 21:26:24.388053 | orchestrator | Saturday 27 September 2025 21:26:21 +0000 (0:00:00.128) 0:00:11.983 **** 2025-09-27 21:26:24.388059 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:26:24.388065 | orchestrator | 2025-09-27 21:26:24.388071 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-27 21:26:24.388077 | orchestrator | Saturday 27 September 2025 21:26:21 +0000 (0:00:00.126) 0:00:12.109 **** 2025-09-27 21:26:24.388082 | orchestrator | changed: [testbed-node-3] => { 2025-09-27 21:26:24.388088 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-27 21:26:24.388094 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:26:24.388100 | orchestrator |  "sdb": { 2025-09-27 21:26:24.388106 | orchestrator |  "osd_lvm_uuid": "c2ef8475-4f12-50de-ab79-c841a7bfbe3d" 2025-09-27 21:26:24.388112 | orchestrator |  }, 2025-09-27 21:26:24.388118 | orchestrator |  "sdc": { 2025-09-27 21:26:24.388123 | orchestrator |  "osd_lvm_uuid": "e5968580-5dd1-5a87-a5e5-bc9ba69f72d9" 2025-09-27 21:26:24.388129 | orchestrator |  } 2025-09-27 21:26:24.388135 | orchestrator |  }, 2025-09-27 21:26:24.388141 | orchestrator |  "lvm_volumes": [ 2025-09-27 21:26:24.388147 | orchestrator |  { 2025-09-27 21:26:24.388152 | orchestrator |  "data": "osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d", 2025-09-27 21:26:24.388158 | orchestrator |  "data_vg": "ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d" 2025-09-27 21:26:24.388164 | orchestrator |  }, 2025-09-27 
21:26:24.388170 | orchestrator |  { 2025-09-27 21:26:24.388176 | orchestrator |  "data": "osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9", 2025-09-27 21:26:24.388182 | orchestrator |  "data_vg": "ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9" 2025-09-27 21:26:24.388187 | orchestrator |  } 2025-09-27 21:26:24.388193 | orchestrator |  ] 2025-09-27 21:26:24.388199 | orchestrator |  } 2025-09-27 21:26:24.388205 | orchestrator | } 2025-09-27 21:26:24.388211 | orchestrator | 2025-09-27 21:26:24.388216 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-27 21:26:24.388227 | orchestrator | Saturday 27 September 2025 21:26:22 +0000 (0:00:00.359) 0:00:12.469 **** 2025-09-27 21:26:24.388233 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-27 21:26:24.388239 | orchestrator | 2025-09-27 21:26:24.388244 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-27 21:26:24.388250 | orchestrator | 2025-09-27 21:26:24.388256 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:26:24.388262 | orchestrator | Saturday 27 September 2025 21:26:23 +0000 (0:00:01.769) 0:00:14.238 **** 2025-09-27 21:26:24.388268 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-27 21:26:24.388274 | orchestrator | 2025-09-27 21:26:24.388280 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:26:24.388285 | orchestrator | Saturday 27 September 2025 21:26:24 +0000 (0:00:00.253) 0:00:14.492 **** 2025-09-27 21:26:24.388291 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:26:24.388297 | orchestrator | 2025-09-27 21:26:24.388303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:24.388312 | orchestrator | Saturday 27 September 2025 21:26:24 +0000 (0:00:00.227) 0:00:14.719 **** 2025-09-27 21:26:32.108259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-27 21:26:32.108345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-27 21:26:32.108359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-27 21:26:32.108370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-27 21:26:32.108380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-27 21:26:32.108391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-27 21:26:32.108401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-27 21:26:32.108412 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-27 21:26:32.108423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-27 21:26:32.108433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-27 21:26:32.108458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-27 21:26:32.108470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-27 21:26:32.108480 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-27 21:26:32.108495 | orchestrator | 2025-09-27 21:26:32.108506 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108518 | orchestrator | Saturday 27 September 2025 21:26:24 +0000 (0:00:00.380) 0:00:15.100 **** 2025-09-27 21:26:32.108529 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108541 | orchestrator | 2025-09-27 21:26:32.108551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108562 | orchestrator | Saturday 27 September 2025 21:26:24 +0000 (0:00:00.200) 0:00:15.301 **** 2025-09-27 21:26:32.108573 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108583 | orchestrator | 2025-09-27 21:26:32.108594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108605 | orchestrator | Saturday 27 September 2025 21:26:25 +0000 (0:00:00.196) 0:00:15.497 **** 2025-09-27 21:26:32.108615 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108626 | orchestrator | 2025-09-27 21:26:32.108637 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108647 | orchestrator | Saturday 27 September 2025 21:26:25 +0000 (0:00:00.189) 0:00:15.687 **** 2025-09-27 21:26:32.108658 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108690 | orchestrator | 2025-09-27 21:26:32.108701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108712 | orchestrator | Saturday 27 September 2025 21:26:25 +0000 (0:00:00.196) 0:00:15.883 **** 2025-09-27 21:26:32.108722 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108733 | orchestrator | 2025-09-27 21:26:32.108743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108754 | orchestrator | Saturday 27 September 2025 21:26:26 +0000 (0:00:00.626) 0:00:16.510 **** 2025-09-27 21:26:32.108764 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108775 | orchestrator | 2025-09-27 21:26:32.108786 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108796 | orchestrator | Saturday 27 September 2025 21:26:26 +0000 (0:00:00.209) 0:00:16.719 **** 2025-09-27 21:26:32.108807 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108817 | orchestrator | 2025-09-27 21:26:32.108828 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108839 | orchestrator | Saturday 27 September 2025 21:26:26 +0000 (0:00:00.190) 0:00:16.910 **** 2025-09-27 21:26:32.108849 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.108860 | orchestrator | 2025-09-27 21:26:32.108871 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108881 | orchestrator | Saturday 27 September 2025 21:26:26 +0000 (0:00:00.207) 0:00:17.117 **** 2025-09-27 21:26:32.108892 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b) 2025-09-27 21:26:32.108904 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b) 2025-09-27 21:26:32.108914 | orchestrator | 2025-09-27 
21:26:32.108925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.108936 | orchestrator | Saturday 27 September 2025 21:26:27 +0000 (0:00:00.410) 0:00:17.528 **** 2025-09-27 21:26:32.108947 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0) 2025-09-27 21:26:32.108957 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0) 2025-09-27 21:26:32.108994 | orchestrator | 2025-09-27 21:26:32.109005 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.109016 | orchestrator | Saturday 27 September 2025 21:26:27 +0000 (0:00:00.400) 0:00:17.929 **** 2025-09-27 21:26:32.109027 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac) 2025-09-27 21:26:32.109038 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac) 2025-09-27 21:26:32.109048 | orchestrator | 2025-09-27 21:26:32.109059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.109070 | orchestrator | Saturday 27 September 2025 21:26:27 +0000 (0:00:00.410) 0:00:18.339 **** 2025-09-27 21:26:32.109096 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9) 2025-09-27 21:26:32.109107 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9) 2025-09-27 21:26:32.109118 | orchestrator | 2025-09-27 21:26:32.109129 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:32.109140 | orchestrator | Saturday 27 September 2025 21:26:28 +0000 (0:00:00.550) 0:00:18.889 **** 2025-09-27 21:26:32.109151 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:26:32.109161 | orchestrator | 2025-09-27 21:26:32.109172 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109194 | orchestrator | Saturday 27 September 2025 21:26:28 +0000 (0:00:00.407) 0:00:19.297 **** 2025-09-27 21:26:32.109206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-27 21:26:32.109224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-27 21:26:32.109235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-27 21:26:32.109246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-27 21:26:32.109257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-27 21:26:32.109267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-27 21:26:32.109278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-27 21:26:32.109289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-27 21:26:32.109299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-27 21:26:32.109310 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-27 21:26:32.109320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-27 21:26:32.109331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-27 21:26:32.109342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-27 21:26:32.109352 | orchestrator | 2025-09-27 21:26:32.109363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109374 | orchestrator | Saturday 27 September 2025 21:26:29 +0000 (0:00:00.386) 0:00:19.683 **** 2025-09-27 21:26:32.109384 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109395 | orchestrator | 2025-09-27 21:26:32.109406 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109417 | orchestrator | Saturday 27 September 2025 21:26:29 +0000 (0:00:00.193) 0:00:19.877 **** 2025-09-27 21:26:32.109427 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109438 | orchestrator | 2025-09-27 21:26:32.109449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109459 | orchestrator | Saturday 27 September 2025 21:26:30 +0000 (0:00:00.624) 0:00:20.502 **** 2025-09-27 21:26:32.109470 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109480 | orchestrator | 2025-09-27 21:26:32.109491 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109502 | orchestrator | Saturday 27 September 2025 21:26:30 +0000 (0:00:00.196) 0:00:20.698 **** 2025-09-27 21:26:32.109513 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109523 | orchestrator | 2025-09-27 21:26:32.109534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109545 | orchestrator | Saturday 27 September 2025 21:26:30 +0000 (0:00:00.192) 0:00:20.890 **** 2025-09-27 21:26:32.109556 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109566 | orchestrator | 2025-09-27 21:26:32.109577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109588 | orchestrator | Saturday 27 September 2025 21:26:30 +0000 (0:00:00.147) 0:00:21.037 **** 2025-09-27 21:26:32.109599 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109609 | orchestrator | 2025-09-27 21:26:32.109620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109631 | orchestrator | Saturday 27 September 2025 21:26:30 +0000 (0:00:00.206) 0:00:21.244 **** 2025-09-27 21:26:32.109641 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109652 | orchestrator | 2025-09-27 21:26:32.109663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109673 | orchestrator | Saturday 27 September 2025 21:26:31 +0000 (0:00:00.169) 0:00:21.414 **** 2025-09-27 21:26:32.109684 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109695 | orchestrator | 2025-09-27 21:26:32.109705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109722 | orchestrator | Saturday 27 September 
2025 21:26:31 +0000 (0:00:00.166) 0:00:21.580 **** 2025-09-27 21:26:32.109733 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-27 21:26:32.109744 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-27 21:26:32.109755 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-27 21:26:32.109765 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-27 21:26:32.109776 | orchestrator | 2025-09-27 21:26:32.109787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:32.109798 | orchestrator | Saturday 27 September 2025 21:26:31 +0000 (0:00:00.694) 0:00:22.274 **** 2025-09-27 21:26:32.109808 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:32.109819 | orchestrator | 2025-09-27 21:26:32.109836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:37.149689 | orchestrator | Saturday 27 September 2025 21:26:32 +0000 (0:00:00.172) 0:00:22.447 **** 2025-09-27 21:26:37.149776 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.149791 | orchestrator | 2025-09-27 21:26:37.149804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:37.149815 | orchestrator | Saturday 27 September 2025 21:26:32 +0000 (0:00:00.163) 0:00:22.610 **** 2025-09-27 21:26:37.149826 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.149837 | orchestrator | 2025-09-27 21:26:37.149848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:37.149859 | orchestrator | Saturday 27 September 2025 21:26:32 +0000 (0:00:00.162) 0:00:22.773 **** 2025-09-27 21:26:37.149870 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.149881 | orchestrator | 2025-09-27 21:26:37.149905 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-27 21:26:37.149917 | orchestrator | Saturday 27 September 2025 21:26:32 +0000 (0:00:00.151) 0:00:22.924 **** 2025-09-27 21:26:37.149928 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-27 21:26:37.149939 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-27 21:26:37.149950 | orchestrator | 2025-09-27 21:26:37.149995 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-27 21:26:37.150006 | orchestrator | Saturday 27 September 2025 21:26:32 +0000 (0:00:00.258) 0:00:23.182 **** 2025-09-27 21:26:37.150069 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150081 | orchestrator | 2025-09-27 21:26:37.150092 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-27 21:26:37.150103 | orchestrator | Saturday 27 September 2025 21:26:32 +0000 (0:00:00.107) 0:00:23.290 **** 2025-09-27 21:26:37.150114 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150125 | orchestrator | 2025-09-27 21:26:37.150136 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-27 21:26:37.150146 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.102) 0:00:23.393 **** 2025-09-27 21:26:37.150157 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150168 | orchestrator | 2025-09-27 21:26:37.150179 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-27 
21:26:37.150190 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.106) 0:00:23.499 **** 2025-09-27 21:26:37.150200 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:26:37.150212 | orchestrator | 2025-09-27 21:26:37.150223 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-27 21:26:37.150234 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.097) 0:00:23.596 **** 2025-09-27 21:26:37.150245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de74169a-f069-5642-ad17-f2f17c514bb2'}}) 2025-09-27 21:26:37.150258 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '364a105c-f104-5917-80d0-e8f8560ea5f8'}}) 2025-09-27 21:26:37.150270 | orchestrator | 2025-09-27 21:26:37.150283 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-27 21:26:37.150314 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.152) 0:00:23.749 **** 2025-09-27 21:26:37.150328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de74169a-f069-5642-ad17-f2f17c514bb2'}})  2025-09-27 21:26:37.150341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '364a105c-f104-5917-80d0-e8f8560ea5f8'}})  2025-09-27 21:26:37.150353 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150364 | orchestrator | 2025-09-27 21:26:37.150376 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-27 21:26:37.150389 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.124) 0:00:23.874 **** 2025-09-27 21:26:37.150401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de74169a-f069-5642-ad17-f2f17c514bb2'}})  2025-09-27 21:26:37.150413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '364a105c-f104-5917-80d0-e8f8560ea5f8'}})  2025-09-27 21:26:37.150425 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150437 | orchestrator | 2025-09-27 21:26:37.150449 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-27 21:26:37.150460 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.127) 0:00:24.002 **** 2025-09-27 21:26:37.150473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de74169a-f069-5642-ad17-f2f17c514bb2'}})  2025-09-27 21:26:37.150485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '364a105c-f104-5917-80d0-e8f8560ea5f8'}})  2025-09-27 21:26:37.150497 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150509 | orchestrator | 2025-09-27 21:26:37.150521 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-27 21:26:37.150533 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.115) 0:00:24.117 **** 2025-09-27 21:26:37.150545 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:26:37.150557 | orchestrator | 2025-09-27 21:26:37.150569 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-27 21:26:37.150582 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.109) 0:00:24.227 **** 2025-09-27 21:26:37.150594 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:26:37.150606 
| orchestrator | 2025-09-27 21:26:37.150617 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-27 21:26:37.150628 | orchestrator | Saturday 27 September 2025 21:26:33 +0000 (0:00:00.105) 0:00:24.333 **** 2025-09-27 21:26:37.150638 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150649 | orchestrator | 2025-09-27 21:26:37.150675 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-27 21:26:37.150687 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.100) 0:00:24.434 **** 2025-09-27 21:26:37.150698 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150709 | orchestrator | 2025-09-27 21:26:37.150719 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-27 21:26:37.150730 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.235) 0:00:24.669 **** 2025-09-27 21:26:37.150741 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150752 | orchestrator | 2025-09-27 21:26:37.150763 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-27 21:26:37.150773 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.111) 0:00:24.781 **** 2025-09-27 21:26:37.150784 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:26:37.150795 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:26:37.150806 | orchestrator |  "sdb": { 2025-09-27 21:26:37.150817 | orchestrator |  "osd_lvm_uuid": "de74169a-f069-5642-ad17-f2f17c514bb2" 2025-09-27 21:26:37.150827 | orchestrator |  }, 2025-09-27 21:26:37.150838 | orchestrator |  "sdc": { 2025-09-27 21:26:37.150856 | orchestrator |  "osd_lvm_uuid": "364a105c-f104-5917-80d0-e8f8560ea5f8" 2025-09-27 21:26:37.150866 | orchestrator |  } 2025-09-27 21:26:37.150877 | orchestrator |  } 2025-09-27 21:26:37.150888 | orchestrator | } 2025-09-27 21:26:37.150899 | orchestrator | 2025-09-27 21:26:37.150910 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-27 21:26:37.150920 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.115) 0:00:24.896 **** 2025-09-27 21:26:37.150931 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.150942 | orchestrator | 2025-09-27 21:26:37.150977 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-27 21:26:37.150989 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.104) 0:00:25.001 **** 2025-09-27 21:26:37.150999 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.151010 | orchestrator | 2025-09-27 21:26:37.151021 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-27 21:26:37.151032 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.108) 0:00:25.110 **** 2025-09-27 21:26:37.151042 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:26:37.151053 | orchestrator | 2025-09-27 21:26:37.151064 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-27 21:26:37.151075 | orchestrator | Saturday 27 September 2025 21:26:34 +0000 (0:00:00.111) 0:00:25.221 **** 2025-09-27 21:26:37.151085 | orchestrator | changed: [testbed-node-4] => { 2025-09-27 21:26:37.151096 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-27 21:26:37.151107 | orchestrator |  "ceph_osd_devices": { 2025-09-27 
21:26:37.151117 | orchestrator |  "sdb": { 2025-09-27 21:26:37.151128 | orchestrator |  "osd_lvm_uuid": "de74169a-f069-5642-ad17-f2f17c514bb2" 2025-09-27 21:26:37.151143 | orchestrator |  }, 2025-09-27 21:26:37.151154 | orchestrator |  "sdc": { 2025-09-27 21:26:37.151165 | orchestrator |  "osd_lvm_uuid": "364a105c-f104-5917-80d0-e8f8560ea5f8" 2025-09-27 21:26:37.151176 | orchestrator |  } 2025-09-27 21:26:37.151186 | orchestrator |  }, 2025-09-27 21:26:37.151197 | orchestrator |  "lvm_volumes": [ 2025-09-27 21:26:37.151208 | orchestrator |  { 2025-09-27 21:26:37.151218 | orchestrator |  "data": "osd-block-de74169a-f069-5642-ad17-f2f17c514bb2", 2025-09-27 21:26:37.151229 | orchestrator |  "data_vg": "ceph-de74169a-f069-5642-ad17-f2f17c514bb2" 2025-09-27 21:26:37.151240 | orchestrator |  }, 2025-09-27 21:26:37.151251 | orchestrator |  { 2025-09-27 21:26:37.151261 | orchestrator |  "data": "osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8", 2025-09-27 21:26:37.151272 | orchestrator |  "data_vg": "ceph-364a105c-f104-5917-80d0-e8f8560ea5f8" 2025-09-27 21:26:37.151283 | orchestrator |  } 2025-09-27 21:26:37.151293 | orchestrator |  ] 2025-09-27 21:26:37.151304 | orchestrator |  } 2025-09-27 21:26:37.151315 | orchestrator | } 2025-09-27 21:26:37.151325 | orchestrator | 2025-09-27 21:26:37.151336 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-27 21:26:37.151347 | orchestrator | Saturday 27 September 2025 21:26:35 +0000 (0:00:00.164) 0:00:25.386 **** 2025-09-27 21:26:37.151358 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-27 21:26:37.151368 | orchestrator | 2025-09-27 21:26:37.151379 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-27 21:26:37.151390 | orchestrator | 2025-09-27 21:26:37.151401 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:26:37.151412 | orchestrator | Saturday 27 September 2025 21:26:35 +0000 (0:00:00.878) 0:00:26.264 **** 2025-09-27 21:26:37.151422 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-27 21:26:37.151433 | orchestrator | 2025-09-27 21:26:37.151444 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:26:37.151454 | orchestrator | Saturday 27 September 2025 21:26:36 +0000 (0:00:00.364) 0:00:26.629 **** 2025-09-27 21:26:37.151472 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:26:37.151483 | orchestrator | 2025-09-27 21:26:37.151494 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:37.151505 | orchestrator | Saturday 27 September 2025 21:26:36 +0000 (0:00:00.494) 0:00:27.124 **** 2025-09-27 21:26:37.151515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-27 21:26:37.151526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-27 21:26:37.151537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-27 21:26:37.151548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-27 21:26:37.151558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-27 21:26:37.151569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-09-27 21:26:37.151586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-27 21:26:44.650711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-27 21:26:44.650783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-27 21:26:44.650796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-27 21:26:44.650808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-27 21:26:44.650819 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-27 21:26:44.650830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-27 21:26:44.650841 | orchestrator | 2025-09-27 21:26:44.650853 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.650864 | orchestrator | Saturday 27 September 2025 21:26:37 +0000 (0:00:00.357) 0:00:27.481 **** 2025-09-27 21:26:44.650875 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.650887 | orchestrator | 2025-09-27 21:26:44.650898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.650909 | orchestrator | Saturday 27 September 2025 21:26:37 +0000 (0:00:00.185) 0:00:27.667 **** 2025-09-27 21:26:44.650920 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.650931 | orchestrator | 2025-09-27 21:26:44.650942 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.650985 | orchestrator | Saturday 27 September 2025 21:26:37 +0000 (0:00:00.175) 0:00:27.842 **** 2025-09-27 21:26:44.651006 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651023 | orchestrator | 2025-09-27 21:26:44.651041 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651061 | orchestrator | Saturday 27 September 2025 21:26:37 +0000 (0:00:00.181) 0:00:28.024 **** 2025-09-27 21:26:44.651073 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651084 | orchestrator | 2025-09-27 21:26:44.651095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651105 | orchestrator | Saturday 27 September 2025 21:26:37 +0000 (0:00:00.181) 0:00:28.206 **** 2025-09-27 21:26:44.651116 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651127 | orchestrator | 2025-09-27 21:26:44.651138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651149 | orchestrator | Saturday 27 September 2025 21:26:38 +0000 (0:00:00.204) 0:00:28.410 **** 2025-09-27 21:26:44.651159 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651170 | orchestrator | 2025-09-27 21:26:44.651181 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651192 | orchestrator | Saturday 27 September 2025 21:26:38 +0000 (0:00:00.180) 0:00:28.590 **** 2025-09-27 21:26:44.651219 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651261 | orchestrator | 2025-09-27 21:26:44.651275 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-09-27 21:26:44.651288 | orchestrator | Saturday 27 September 2025 21:26:38 +0000 (0:00:00.207) 0:00:28.797 **** 2025-09-27 21:26:44.651299 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651311 | orchestrator | 2025-09-27 21:26:44.651336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651348 | orchestrator | Saturday 27 September 2025 21:26:38 +0000 (0:00:00.197) 0:00:28.995 **** 2025-09-27 21:26:44.651361 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f) 2025-09-27 21:26:44.651374 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f) 2025-09-27 21:26:44.651386 | orchestrator | 2025-09-27 21:26:44.651399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651411 | orchestrator | Saturday 27 September 2025 21:26:39 +0000 (0:00:00.573) 0:00:29.568 **** 2025-09-27 21:26:44.651423 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e) 2025-09-27 21:26:44.651436 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e) 2025-09-27 21:26:44.651447 | orchestrator | 2025-09-27 21:26:44.651459 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651471 | orchestrator | Saturday 27 September 2025 21:26:39 +0000 (0:00:00.616) 0:00:30.185 **** 2025-09-27 21:26:44.651483 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad) 2025-09-27 21:26:44.651495 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad) 2025-09-27 21:26:44.651507 | orchestrator | 2025-09-27 21:26:44.651519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651531 | orchestrator | Saturday 27 September 2025 21:26:40 +0000 (0:00:00.405) 0:00:30.591 **** 2025-09-27 21:26:44.651543 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa) 2025-09-27 21:26:44.651555 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa) 2025-09-27 21:26:44.651567 | orchestrator | 2025-09-27 21:26:44.651579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:26:44.651590 | orchestrator | Saturday 27 September 2025 21:26:40 +0000 (0:00:00.407) 0:00:30.998 **** 2025-09-27 21:26:44.651603 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:26:44.651615 | orchestrator | 2025-09-27 21:26:44.651627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.651638 | orchestrator | Saturday 27 September 2025 21:26:40 +0000 (0:00:00.323) 0:00:31.322 **** 2025-09-27 21:26:44.651663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-27 21:26:44.651675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-27 21:26:44.651686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-27 21:26:44.651697 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-27 21:26:44.651708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-27 21:26:44.651719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-27 21:26:44.651729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-27 21:26:44.651740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-27 21:26:44.651759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-27 21:26:44.651799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-27 21:26:44.651824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-27 21:26:44.651843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-27 21:26:44.651862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-27 21:26:44.651880 | orchestrator | 2025-09-27 21:26:44.651901 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.651918 | orchestrator | Saturday 27 September 2025 21:26:41 +0000 (0:00:00.454) 0:00:31.776 **** 2025-09-27 21:26:44.651929 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.651940 | orchestrator | 2025-09-27 21:26:44.651983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.651994 | orchestrator | Saturday 27 September 2025 21:26:41 +0000 (0:00:00.203) 0:00:31.980 **** 2025-09-27 21:26:44.652005 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652016 | orchestrator | 2025-09-27 21:26:44.652027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652038 | orchestrator | Saturday 27 September 2025 21:26:41 +0000 (0:00:00.194) 0:00:32.174 **** 2025-09-27 21:26:44.652049 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652060 | orchestrator | 2025-09-27 21:26:44.652071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652082 | orchestrator | Saturday 27 September 2025 21:26:42 +0000 (0:00:00.213) 0:00:32.388 **** 2025-09-27 21:26:44.652093 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652104 | orchestrator | 2025-09-27 21:26:44.652114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652125 | orchestrator | Saturday 27 September 2025 21:26:42 +0000 (0:00:00.192) 0:00:32.581 **** 2025-09-27 21:26:44.652136 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652147 | orchestrator | 2025-09-27 21:26:44.652158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652169 | orchestrator | Saturday 27 September 2025 21:26:42 +0000 (0:00:00.172) 0:00:32.753 **** 2025-09-27 21:26:44.652180 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652190 | orchestrator | 2025-09-27 21:26:44.652201 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-09-27 21:26:44.652212 | orchestrator | Saturday 27 September 2025 21:26:42 +0000 (0:00:00.508) 0:00:33.262 **** 2025-09-27 21:26:44.652223 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652234 | orchestrator | 2025-09-27 21:26:44.652245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652256 | orchestrator | Saturday 27 September 2025 21:26:43 +0000 (0:00:00.175) 0:00:33.437 **** 2025-09-27 21:26:44.652266 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652277 | orchestrator | 2025-09-27 21:26:44.652288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652299 | orchestrator | Saturday 27 September 2025 21:26:43 +0000 (0:00:00.183) 0:00:33.621 **** 2025-09-27 21:26:44.652310 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-27 21:26:44.652321 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-27 21:26:44.652332 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-27 21:26:44.652343 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-27 21:26:44.652354 | orchestrator | 2025-09-27 21:26:44.652365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652376 | orchestrator | Saturday 27 September 2025 21:26:43 +0000 (0:00:00.632) 0:00:34.253 **** 2025-09-27 21:26:44.652386 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652397 | orchestrator | 2025-09-27 21:26:44.652408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652429 | orchestrator | Saturday 27 September 2025 21:26:44 +0000 (0:00:00.179) 0:00:34.432 **** 2025-09-27 21:26:44.652440 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652451 | orchestrator | 2025-09-27 21:26:44.652462 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652473 | orchestrator | Saturday 27 September 2025 21:26:44 +0000 (0:00:00.178) 0:00:34.610 **** 2025-09-27 21:26:44.652484 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652495 | orchestrator | 2025-09-27 21:26:44.652506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:26:44.652516 | orchestrator | Saturday 27 September 2025 21:26:44 +0000 (0:00:00.180) 0:00:34.791 **** 2025-09-27 21:26:44.652534 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:44.652545 | orchestrator | 2025-09-27 21:26:44.652557 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-27 21:26:44.652577 | orchestrator | Saturday 27 September 2025 21:26:44 +0000 (0:00:00.193) 0:00:34.984 **** 2025-09-27 21:26:48.309367 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-09-27 21:26:48.309455 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-27 21:26:48.309469 | orchestrator | 2025-09-27 21:26:48.309482 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-27 21:26:48.309493 | orchestrator | Saturday 27 September 2025 21:26:44 +0000 (0:00:00.175) 0:00:35.160 **** 2025-09-27 21:26:48.309504 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.309515 | orchestrator | 2025-09-27 21:26:48.309543 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-09-27 21:26:48.309554 | orchestrator | Saturday 27 September 2025 21:26:44 +0000 (0:00:00.117) 0:00:35.277 **** 2025-09-27 21:26:48.309565 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.309576 | orchestrator | 2025-09-27 21:26:48.309587 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-27 21:26:48.309598 | orchestrator | Saturday 27 September 2025 21:26:45 +0000 (0:00:00.107) 0:00:35.385 **** 2025-09-27 21:26:48.309608 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.309619 | orchestrator | 2025-09-27 21:26:48.309630 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-27 21:26:48.309669 | orchestrator | Saturday 27 September 2025 21:26:45 +0000 (0:00:00.103) 0:00:35.488 **** 2025-09-27 21:26:48.309682 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:26:48.309694 | orchestrator | 2025-09-27 21:26:48.309705 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-27 21:26:48.309716 | orchestrator | Saturday 27 September 2025 21:26:45 +0000 (0:00:00.299) 0:00:35.788 **** 2025-09-27 21:26:48.309735 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f61d8e2-65b7-57ca-8dcb-2a964e525246'}}) 2025-09-27 21:26:48.309756 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}}) 2025-09-27 21:26:48.309775 | orchestrator | 2025-09-27 21:26:48.309796 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-27 21:26:48.309817 | orchestrator | Saturday 27 September 2025 21:26:45 +0000 (0:00:00.152) 0:00:35.940 **** 2025-09-27 21:26:48.309838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f61d8e2-65b7-57ca-8dcb-2a964e525246'}})  2025-09-27 21:26:48.309852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}})  2025-09-27 21:26:48.309863 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.309874 | orchestrator | 2025-09-27 21:26:48.309905 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-27 21:26:48.309917 | orchestrator | Saturday 27 September 2025 21:26:45 +0000 (0:00:00.128) 0:00:36.069 **** 2025-09-27 21:26:48.309928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f61d8e2-65b7-57ca-8dcb-2a964e525246'}})  2025-09-27 21:26:48.310003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}})  2025-09-27 21:26:48.310051 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310063 | orchestrator | 2025-09-27 21:26:48.310092 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-27 21:26:48.310104 | orchestrator | Saturday 27 September 2025 21:26:45 +0000 (0:00:00.141) 0:00:36.210 **** 2025-09-27 21:26:48.310115 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f61d8e2-65b7-57ca-8dcb-2a964e525246'}})  2025-09-27 21:26:48.310126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}})  2025-09-27 
21:26:48.310136 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310147 | orchestrator | 2025-09-27 21:26:48.310158 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-27 21:26:48.310169 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.144) 0:00:36.355 **** 2025-09-27 21:26:48.310190 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:26:48.310202 | orchestrator | 2025-09-27 21:26:48.310212 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-27 21:26:48.310223 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.109) 0:00:36.465 **** 2025-09-27 21:26:48.310234 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:26:48.310245 | orchestrator | 2025-09-27 21:26:48.310256 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-27 21:26:48.310267 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.121) 0:00:36.586 **** 2025-09-27 21:26:48.310278 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310288 | orchestrator | 2025-09-27 21:26:48.310299 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-27 21:26:48.310310 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.115) 0:00:36.701 **** 2025-09-27 21:26:48.310321 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310345 | orchestrator | 2025-09-27 21:26:48.310356 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-27 21:26:48.310367 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.136) 0:00:36.838 **** 2025-09-27 21:26:48.310378 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310389 | orchestrator | 2025-09-27 21:26:48.310400 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-27 21:26:48.310411 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.122) 0:00:36.960 **** 2025-09-27 21:26:48.310421 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:26:48.310432 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:26:48.310443 | orchestrator |  "sdb": { 2025-09-27 21:26:48.310454 | orchestrator |  "osd_lvm_uuid": "5f61d8e2-65b7-57ca-8dcb-2a964e525246" 2025-09-27 21:26:48.310481 | orchestrator |  }, 2025-09-27 21:26:48.310493 | orchestrator |  "sdc": { 2025-09-27 21:26:48.310504 | orchestrator |  "osd_lvm_uuid": "2897d5b9-8afd-5dc0-8795-bd1d3af2960f" 2025-09-27 21:26:48.310515 | orchestrator |  } 2025-09-27 21:26:48.310526 | orchestrator |  } 2025-09-27 21:26:48.310537 | orchestrator | } 2025-09-27 21:26:48.310548 | orchestrator | 2025-09-27 21:26:48.310559 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-27 21:26:48.310570 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.108) 0:00:37.069 **** 2025-09-27 21:26:48.310581 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310592 | orchestrator | 2025-09-27 21:26:48.310603 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-27 21:26:48.310614 | orchestrator | Saturday 27 September 2025 21:26:46 +0000 (0:00:00.102) 0:00:37.172 **** 2025-09-27 21:26:48.310625 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310636 | orchestrator | 2025-09-27 21:26:48.310647 | orchestrator | 
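The per-host result is then persisted by the "Write configuration file" handler (seen above for testbed-node-3 and testbed-node-4, and below for testbed-node-5), which runs delegated to testbed-manager. A sketch of what such a handler could look like, assuming the _ceph_configure_lvm_config_data fact from this log; the destination path is an assumption, not the actual OSISM handler:

- name: Write configuration file
  ansible.builtin.copy:
    # Dump the computed Ceph LVM data for this host into the configuration
    # repository on the manager node (the path below is a placeholder).
    content: "{{ _ceph_configure_lvm_config_data | to_nice_yaml }}"
    dest: "/opt/configuration/environments/ceph/host_vars/{{ inventory_hostname }}.yml"
  delegate_to: testbed-manager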
TASK [Print shared DB/WAL devices] ********************************************* 2025-09-27 21:26:48.310667 | orchestrator | Saturday 27 September 2025 21:26:47 +0000 (0:00:00.268) 0:00:37.441 **** 2025-09-27 21:26:48.310678 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:26:48.310689 | orchestrator | 2025-09-27 21:26:48.310700 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-27 21:26:48.310711 | orchestrator | Saturday 27 September 2025 21:26:47 +0000 (0:00:00.176) 0:00:37.617 **** 2025-09-27 21:26:48.310721 | orchestrator | changed: [testbed-node-5] => { 2025-09-27 21:26:48.310732 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-27 21:26:48.310743 | orchestrator |  "ceph_osd_devices": { 2025-09-27 21:26:48.310754 | orchestrator |  "sdb": { 2025-09-27 21:26:48.310766 | orchestrator |  "osd_lvm_uuid": "5f61d8e2-65b7-57ca-8dcb-2a964e525246" 2025-09-27 21:26:48.310787 | orchestrator |  }, 2025-09-27 21:26:48.310807 | orchestrator |  "sdc": { 2025-09-27 21:26:48.310827 | orchestrator |  "osd_lvm_uuid": "2897d5b9-8afd-5dc0-8795-bd1d3af2960f" 2025-09-27 21:26:48.310844 | orchestrator |  } 2025-09-27 21:26:48.310855 | orchestrator |  }, 2025-09-27 21:26:48.310866 | orchestrator |  "lvm_volumes": [ 2025-09-27 21:26:48.310876 | orchestrator |  { 2025-09-27 21:26:48.310887 | orchestrator |  "data": "osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246", 2025-09-27 21:26:48.310898 | orchestrator |  "data_vg": "ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246" 2025-09-27 21:26:48.310909 | orchestrator |  }, 2025-09-27 21:26:48.310919 | orchestrator |  { 2025-09-27 21:26:48.310930 | orchestrator |  "data": "osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f", 2025-09-27 21:26:48.310985 | orchestrator |  "data_vg": "ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f" 2025-09-27 21:26:48.310999 | orchestrator |  } 2025-09-27 21:26:48.311010 | orchestrator |  ] 2025-09-27 21:26:48.311021 | orchestrator |  } 2025-09-27 21:26:48.311035 | orchestrator | } 2025-09-27 21:26:48.311046 | orchestrator | 2025-09-27 21:26:48.311057 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-27 21:26:48.311068 | orchestrator | Saturday 27 September 2025 21:26:47 +0000 (0:00:00.188) 0:00:37.805 **** 2025-09-27 21:26:48.311078 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-27 21:26:48.311089 | orchestrator | 2025-09-27 21:26:48.311116 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:26:48.311137 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:26:48.311150 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:26:48.311161 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:26:48.311172 | orchestrator | 2025-09-27 21:26:48.311183 | orchestrator | 2025-09-27 21:26:48.311194 | orchestrator | 2025-09-27 21:26:48.311204 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:26:48.311215 | orchestrator | Saturday 27 September 2025 21:26:48 +0000 (0:00:00.824) 0:00:38.630 **** 2025-09-27 21:26:48.311245 | orchestrator | =============================================================================== 2025-09-27 21:26:48.311257 | orchestrator | Write configuration file 
------------------------------------------------ 3.47s 2025-09-27 21:26:48.311268 | orchestrator | Add known partitions to the list of available block devices ------------- 1.24s 2025-09-27 21:26:48.311278 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2025-09-27 21:26:48.311289 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2025-09-27 21:26:48.311300 | orchestrator | Get initial list of available block devices ----------------------------- 0.94s 2025-09-27 21:26:48.311319 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s 2025-09-27 21:26:48.311330 | orchestrator | Print configuration data ------------------------------------------------ 0.71s 2025-09-27 21:26:48.311341 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2025-09-27 21:26:48.311352 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2025-09-27 21:26:48.311362 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-09-27 21:26:48.311373 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-09-27 21:26:48.311384 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-09-27 21:26:48.311395 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2025-09-27 21:26:48.311406 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-09-27 21:26:48.311437 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.61s 2025-09-27 21:26:48.550099 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-09-27 21:26:48.550181 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-09-27 21:26:48.550195 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2025-09-27 21:26:48.550207 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2025-09-27 21:26:48.550218 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.54s 2025-09-27 21:27:10.826827 | orchestrator | 2025-09-27 21:27:10 | INFO  | Task d1c9323b-dfd9-4903-97b4-28343f0db49d (sync inventory) is running in background. Output coming soon. 
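Editor's note on the ceph-configure-lvm-devices output above: the configuration file written per node is a small YAML document that maps each OSD data device to a stable LVM UUID and derives the lvm_volumes list (osd-block-<uuid> / ceph-<uuid> pairs) from it. The sketch below is not the OSISM implementation; it only illustrates, under the assumption that a name-based UUID (uuid5 over hostname and device name) is one acceptable way to get reproducible identifiers, how data with that shape could be produced. PyYAML and all helper names here are assumptions.

# Sketch only: reproduce the shape of the data shown by the
# "Print configuration data" task above. The uuid5 derivation and all
# names are illustrative assumptions, not the OSISM code.
import uuid
import yaml  # PyYAML, assumed to be available

def build_lvm_config(hostname, devices):
    """Return a dict with ceph_osd_devices and lvm_volumes for one host."""
    ceph_osd_devices = {}
    lvm_volumes = []
    for device in devices:
        # Name-based UUID so repeated runs yield the same identifier
        # (assumption; the real derivation is not visible in this log).
        osd_uuid = str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))
        ceph_osd_devices[device] = {"osd_lvm_uuid": osd_uuid}
        lvm_volumes.append(
            {
                "data": f"osd-block-{osd_uuid}",
                "data_vg": f"ceph-{osd_uuid}",
            }
        )
    return {"ceph_osd_devices": ceph_osd_devices, "lvm_volumes": lvm_volumes}

if __name__ == "__main__":
    config = build_lvm_config("testbed-node-5", ["sdb", "sdc"])
    print(yaml.safe_dump(config, default_flow_style=False))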
2025-09-27 21:27:34.700255 | orchestrator | 2025-09-27 21:27:11 | INFO  | Starting group_vars file reorganization 2025-09-27 21:27:34.700348 | orchestrator | 2025-09-27 21:27:11 | INFO  | Moved 0 file(s) to their respective directories 2025-09-27 21:27:34.700363 | orchestrator | 2025-09-27 21:27:11 | INFO  | Group_vars file reorganization completed 2025-09-27 21:27:34.700375 | orchestrator | 2025-09-27 21:27:14 | INFO  | Starting variable preparation from inventory 2025-09-27 21:27:34.700387 | orchestrator | 2025-09-27 21:27:17 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-27 21:27:34.700398 | orchestrator | 2025-09-27 21:27:17 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-27 21:27:34.700408 | orchestrator | 2025-09-27 21:27:17 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-27 21:27:34.700419 | orchestrator | 2025-09-27 21:27:17 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-27 21:27:34.700430 | orchestrator | 2025-09-27 21:27:17 | INFO  | Variable preparation completed 2025-09-27 21:27:34.700441 | orchestrator | 2025-09-27 21:27:18 | INFO  | Starting inventory overwrite handling 2025-09-27 21:27:34.700452 | orchestrator | 2025-09-27 21:27:18 | INFO  | Handling group overwrites in 99-overwrite 2025-09-27 21:27:34.700463 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removing group frr:children from 60-generic 2025-09-27 21:27:34.700474 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removing group storage:children from 50-kolla 2025-09-27 21:27:34.700485 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removing group netbird:children from 50-infrastructure 2025-09-27 21:27:34.700496 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-27 21:27:34.700507 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-27 21:27:34.700518 | orchestrator | 2025-09-27 21:27:18 | INFO  | Handling group overwrites in 20-roles 2025-09-27 21:27:34.700528 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removing group k3s_node from 50-infrastructure 2025-09-27 21:27:34.700563 | orchestrator | 2025-09-27 21:27:18 | INFO  | Removed 6 group(s) in total 2025-09-27 21:27:34.700575 | orchestrator | 2025-09-27 21:27:18 | INFO  | Inventory overwrite handling completed 2025-09-27 21:27:34.700586 | orchestrator | 2025-09-27 21:27:19 | INFO  | Starting merge of inventory files 2025-09-27 21:27:34.700596 | orchestrator | 2025-09-27 21:27:19 | INFO  | Inventory files merged successfully 2025-09-27 21:27:34.700607 | orchestrator | 2025-09-27 21:27:23 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-27 21:27:34.700617 | orchestrator | 2025-09-27 21:27:33 | INFO  | Successfully wrote ClusterShell configuration 2025-09-27 21:27:34.700628 | orchestrator | [master 3c7d7ac] 2025-09-27-21-27 2025-09-27 21:27:34.700640 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-27 21:27:36.291362 | orchestrator | 2025-09-27 21:27:36 | INFO  | Task cb831706-a191-496f-9f52-fe860edacf1e (ceph-create-lvm-devices) was prepared for execution. 2025-09-27 21:27:36.291441 | orchestrator | 2025-09-27 21:27:36 | INFO  | It takes a moment until task cb831706-a191-496f-9f52-fe860edacf1e (ceph-create-lvm-devices) has been started and output is visible here. 
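Editor's note on the "sync inventory" output above: the overwrite handling removes a group definition from a lower-priority inventory layer (for example frr:children from 60-generic) whenever the same group is redefined in a higher-priority layer such as 99-overwrite, and only then are the files merged and the ClusterShell configuration generated. The snippet below is a rough sketch of that idea only; the line-based INI handling, the helper name, and the example paths are assumptions, not the OSISM implementation.

# Sketch only: drop one [group] section from a lower-priority INI-style
# inventory file before merging. File names and parsing are assumptions.
from pathlib import Path

def remove_group_section(inventory_file: Path, group: str) -> int:
    """Remove one [group] section; return the number of lines dropped."""
    lines = inventory_file.read_text().splitlines(keepends=True)
    kept, dropped, in_section = [], 0, False
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            in_section = stripped == f"[{group}]"
        if in_section:
            dropped += 1
            continue
        kept.append(line)
    inventory_file.write_text("".join(kept))
    return dropped

# Hypothetical call matching the log message
# "Removing group frr:children from 60-generic":
# remove_group_section(Path("/opt/configuration/inventory/60-generic"),
#                      "frr:children")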
2025-09-27 21:27:45.969527 | orchestrator | 2025-09-27 21:27:45.969670 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-27 21:27:45.969701 | orchestrator | 2025-09-27 21:27:45.969722 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:27:45.969740 | orchestrator | Saturday 27 September 2025 21:27:39 +0000 (0:00:00.232) 0:00:00.232 **** 2025-09-27 21:27:45.969761 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-27 21:27:45.969781 | orchestrator | 2025-09-27 21:27:45.969800 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:27:45.969819 | orchestrator | Saturday 27 September 2025 21:27:39 +0000 (0:00:00.229) 0:00:00.462 **** 2025-09-27 21:27:45.969838 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:27:45.969858 | orchestrator | 2025-09-27 21:27:45.969918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.969940 | orchestrator | Saturday 27 September 2025 21:27:39 +0000 (0:00:00.197) 0:00:00.660 **** 2025-09-27 21:27:45.969957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-27 21:27:45.969977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-27 21:27:45.969995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-27 21:27:45.970014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-27 21:27:45.970103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-27 21:27:45.970125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-27 21:27:45.970146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-27 21:27:45.970165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-27 21:27:45.970179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-27 21:27:45.970192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-27 21:27:45.970204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-27 21:27:45.970216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-27 21:27:45.970229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-27 21:27:45.970241 | orchestrator | 2025-09-27 21:27:45.970253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970329 | orchestrator | Saturday 27 September 2025 21:27:40 +0000 (0:00:00.391) 0:00:01.051 **** 2025-09-27 21:27:45.970343 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970355 | orchestrator | 2025-09-27 21:27:45.970365 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970392 | orchestrator | Saturday 27 September 2025 21:27:40 +0000 (0:00:00.349) 0:00:01.400 **** 2025-09-27 21:27:45.970412 | orchestrator | skipping: [testbed-node-3] 2025-09-27 
21:27:45.970429 | orchestrator | 2025-09-27 21:27:45.970446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970465 | orchestrator | Saturday 27 September 2025 21:27:40 +0000 (0:00:00.183) 0:00:01.583 **** 2025-09-27 21:27:45.970490 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970511 | orchestrator | 2025-09-27 21:27:45.970530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970546 | orchestrator | Saturday 27 September 2025 21:27:40 +0000 (0:00:00.185) 0:00:01.769 **** 2025-09-27 21:27:45.970557 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970568 | orchestrator | 2025-09-27 21:27:45.970579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970590 | orchestrator | Saturday 27 September 2025 21:27:41 +0000 (0:00:00.185) 0:00:01.954 **** 2025-09-27 21:27:45.970601 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970612 | orchestrator | 2025-09-27 21:27:45.970623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970634 | orchestrator | Saturday 27 September 2025 21:27:41 +0000 (0:00:00.178) 0:00:02.133 **** 2025-09-27 21:27:45.970645 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970655 | orchestrator | 2025-09-27 21:27:45.970666 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970677 | orchestrator | Saturday 27 September 2025 21:27:41 +0000 (0:00:00.179) 0:00:02.312 **** 2025-09-27 21:27:45.970688 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970699 | orchestrator | 2025-09-27 21:27:45.970710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970720 | orchestrator | Saturday 27 September 2025 21:27:41 +0000 (0:00:00.183) 0:00:02.496 **** 2025-09-27 21:27:45.970731 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.970742 | orchestrator | 2025-09-27 21:27:45.970753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970764 | orchestrator | Saturday 27 September 2025 21:27:41 +0000 (0:00:00.166) 0:00:02.662 **** 2025-09-27 21:27:45.970775 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163) 2025-09-27 21:27:45.970788 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163) 2025-09-27 21:27:45.970798 | orchestrator | 2025-09-27 21:27:45.970809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970820 | orchestrator | Saturday 27 September 2025 21:27:42 +0000 (0:00:00.362) 0:00:03.024 **** 2025-09-27 21:27:45.970854 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990) 2025-09-27 21:27:45.970866 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990) 2025-09-27 21:27:45.970909 | orchestrator | 2025-09-27 21:27:45.970930 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.970949 | orchestrator | Saturday 27 September 2025 21:27:42 +0000 (0:00:00.367) 0:00:03.391 **** 2025-09-27 
21:27:45.970963 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a) 2025-09-27 21:27:45.970974 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a) 2025-09-27 21:27:45.970985 | orchestrator | 2025-09-27 21:27:45.970995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.971018 | orchestrator | Saturday 27 September 2025 21:27:43 +0000 (0:00:00.480) 0:00:03.872 **** 2025-09-27 21:27:45.971028 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c) 2025-09-27 21:27:45.971039 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c) 2025-09-27 21:27:45.971050 | orchestrator | 2025-09-27 21:27:45.971066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:27:45.971084 | orchestrator | Saturday 27 September 2025 21:27:43 +0000 (0:00:00.558) 0:00:04.431 **** 2025-09-27 21:27:45.971101 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:27:45.971119 | orchestrator | 2025-09-27 21:27:45.971136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971154 | orchestrator | Saturday 27 September 2025 21:27:44 +0000 (0:00:00.611) 0:00:05.043 **** 2025-09-27 21:27:45.971173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-27 21:27:45.971190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-27 21:27:45.971207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-27 21:27:45.971218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-27 21:27:45.971229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-27 21:27:45.971240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-27 21:27:45.971250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-27 21:27:45.971261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-27 21:27:45.971271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-27 21:27:45.971282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-27 21:27:45.971292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-27 21:27:45.971303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-27 21:27:45.971313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-27 21:27:45.971324 | orchestrator | 2025-09-27 21:27:45.971335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971346 | orchestrator | Saturday 27 September 2025 21:27:44 +0000 (0:00:00.371) 0:00:05.415 **** 2025-09-27 21:27:45.971356 | orchestrator | skipping: [testbed-node-3] 
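Editor's note on the device-discovery tasks in this play: the list of usable block devices is built in three passes, namely the initial lsblk listing, the /dev/disk/by-id symlinks that resolve to each device (which is why entries like scsi-0QEMU_QEMU_HARDDISK_... appear above and can be referenced in ceph_osd_devices), and finally the partitions of each device. The Python sketch below shows one way to assemble such a map; helper names are mine, while lsblk --json and /dev/disk/by-id are standard interfaces.

# Sketch only: build an "available block devices" map (base devices,
# by-id links, partitions) similar to what the tasks above assemble.
import json
import os
import subprocess

def available_block_devices():
    out = subprocess.run(
        ["lsblk", "--json", "-o", "NAME,TYPE"],
        check=True, capture_output=True, text=True,
    ).stdout
    devices = {}
    for dev in json.loads(out)["blockdevices"]:
        devices[dev["name"]] = {
            "links": [],
            "partitions": [c["name"] for c in dev.get("children", [])
                           if c.get("type") == "part"],
        }
    by_id = "/dev/disk/by-id"
    if os.path.isdir(by_id):
        for link in os.listdir(by_id):
            target = os.path.basename(
                os.path.realpath(os.path.join(by_id, link)))
            if target in devices:
                devices[target]["links"].append(link)
    return devices

if __name__ == "__main__":
    print(json.dumps(available_block_devices(), indent=2))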
2025-09-27 21:27:45.971367 | orchestrator | 2025-09-27 21:27:45.971378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971388 | orchestrator | Saturday 27 September 2025 21:27:44 +0000 (0:00:00.190) 0:00:05.605 **** 2025-09-27 21:27:45.971399 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.971410 | orchestrator | 2025-09-27 21:27:45.971420 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971431 | orchestrator | Saturday 27 September 2025 21:27:44 +0000 (0:00:00.184) 0:00:05.789 **** 2025-09-27 21:27:45.971442 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.971452 | orchestrator | 2025-09-27 21:27:45.971463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971473 | orchestrator | Saturday 27 September 2025 21:27:45 +0000 (0:00:00.170) 0:00:05.960 **** 2025-09-27 21:27:45.971484 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.971494 | orchestrator | 2025-09-27 21:27:45.971505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971524 | orchestrator | Saturday 27 September 2025 21:27:45 +0000 (0:00:00.171) 0:00:06.132 **** 2025-09-27 21:27:45.971535 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.971545 | orchestrator | 2025-09-27 21:27:45.971556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971567 | orchestrator | Saturday 27 September 2025 21:27:45 +0000 (0:00:00.167) 0:00:06.299 **** 2025-09-27 21:27:45.971577 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.971588 | orchestrator | 2025-09-27 21:27:45.971599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971609 | orchestrator | Saturday 27 September 2025 21:27:45 +0000 (0:00:00.169) 0:00:06.469 **** 2025-09-27 21:27:45.971620 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:45.971631 | orchestrator | 2025-09-27 21:27:45.971641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:45.971652 | orchestrator | Saturday 27 September 2025 21:27:45 +0000 (0:00:00.179) 0:00:06.648 **** 2025-09-27 21:27:45.971673 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732178 | orchestrator | 2025-09-27 21:27:53.732301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:53.732319 | orchestrator | Saturday 27 September 2025 21:27:45 +0000 (0:00:00.179) 0:00:06.828 **** 2025-09-27 21:27:53.732331 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-27 21:27:53.732344 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-27 21:27:53.732356 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-27 21:27:53.732366 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-27 21:27:53.732377 | orchestrator | 2025-09-27 21:27:53.732389 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:53.732400 | orchestrator | Saturday 27 September 2025 21:27:46 +0000 (0:00:00.965) 0:00:07.793 **** 2025-09-27 21:27:53.732411 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732422 | orchestrator | 2025-09-27 21:27:53.732433 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:53.732444 | orchestrator | Saturday 27 September 2025 21:27:47 +0000 (0:00:00.182) 0:00:07.975 **** 2025-09-27 21:27:53.732455 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732465 | orchestrator | 2025-09-27 21:27:53.732476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:53.732487 | orchestrator | Saturday 27 September 2025 21:27:47 +0000 (0:00:00.177) 0:00:08.153 **** 2025-09-27 21:27:53.732498 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732509 | orchestrator | 2025-09-27 21:27:53.732520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:27:53.732531 | orchestrator | Saturday 27 September 2025 21:27:47 +0000 (0:00:00.184) 0:00:08.337 **** 2025-09-27 21:27:53.732542 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732553 | orchestrator | 2025-09-27 21:27:53.732564 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-27 21:27:53.732575 | orchestrator | Saturday 27 September 2025 21:27:47 +0000 (0:00:00.190) 0:00:08.527 **** 2025-09-27 21:27:53.732586 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732596 | orchestrator | 2025-09-27 21:27:53.732607 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-27 21:27:53.732618 | orchestrator | Saturday 27 September 2025 21:27:47 +0000 (0:00:00.109) 0:00:08.637 **** 2025-09-27 21:27:53.732630 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}}) 2025-09-27 21:27:53.732641 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}}) 2025-09-27 21:27:53.732652 | orchestrator | 2025-09-27 21:27:53.732663 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-27 21:27:53.732674 | orchestrator | Saturday 27 September 2025 21:27:47 +0000 (0:00:00.170) 0:00:08.808 **** 2025-09-27 21:27:53.732712 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}) 2025-09-27 21:27:53.732727 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}) 2025-09-27 21:27:53.732739 | orchestrator | 2025-09-27 21:27:53.732767 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-27 21:27:53.732784 | orchestrator | Saturday 27 September 2025 21:27:49 +0000 (0:00:01.888) 0:00:10.697 **** 2025-09-27 21:27:53.732798 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.732812 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.732824 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732836 | orchestrator | 2025-09-27 21:27:53.732848 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-27 
21:27:53.732860 | orchestrator | Saturday 27 September 2025 21:27:49 +0000 (0:00:00.136) 0:00:10.833 **** 2025-09-27 21:27:53.732892 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}) 2025-09-27 21:27:53.732905 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}) 2025-09-27 21:27:53.732916 | orchestrator | 2025-09-27 21:27:53.732928 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-27 21:27:53.732940 | orchestrator | Saturday 27 September 2025 21:27:51 +0000 (0:00:01.457) 0:00:12.290 **** 2025-09-27 21:27:53.732952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.732965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.732978 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.732990 | orchestrator | 2025-09-27 21:27:53.733003 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-27 21:27:53.733015 | orchestrator | Saturday 27 September 2025 21:27:51 +0000 (0:00:00.169) 0:00:12.459 **** 2025-09-27 21:27:53.733027 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733039 | orchestrator | 2025-09-27 21:27:53.733051 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-27 21:27:53.733081 | orchestrator | Saturday 27 September 2025 21:27:51 +0000 (0:00:00.134) 0:00:12.594 **** 2025-09-27 21:27:53.733093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.733105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.733116 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733127 | orchestrator | 2025-09-27 21:27:53.733138 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-27 21:27:53.733149 | orchestrator | Saturday 27 September 2025 21:27:52 +0000 (0:00:00.384) 0:00:12.978 **** 2025-09-27 21:27:53.733160 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733171 | orchestrator | 2025-09-27 21:27:53.733182 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-27 21:27:53.733193 | orchestrator | Saturday 27 September 2025 21:27:52 +0000 (0:00:00.169) 0:00:13.147 **** 2025-09-27 21:27:53.733204 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.733223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.733235 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733246 | orchestrator | 2025-09-27 21:27:53.733257 | orchestrator | 
TASK [Create DB+WAL VGs] ******************************************************* 2025-09-27 21:27:53.733268 | orchestrator | Saturday 27 September 2025 21:27:52 +0000 (0:00:00.166) 0:00:13.314 **** 2025-09-27 21:27:53.733279 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733290 | orchestrator | 2025-09-27 21:27:53.733301 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-27 21:27:53.733312 | orchestrator | Saturday 27 September 2025 21:27:52 +0000 (0:00:00.138) 0:00:13.452 **** 2025-09-27 21:27:53.733323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.733334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.733345 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733379 | orchestrator | 2025-09-27 21:27:53.733391 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-27 21:27:53.733402 | orchestrator | Saturday 27 September 2025 21:27:52 +0000 (0:00:00.187) 0:00:13.640 **** 2025-09-27 21:27:53.733414 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:27:53.733443 | orchestrator | 2025-09-27 21:27:53.733454 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-27 21:27:53.733465 | orchestrator | Saturday 27 September 2025 21:27:52 +0000 (0:00:00.153) 0:00:13.794 **** 2025-09-27 21:27:53.733482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.733494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.733505 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733516 | orchestrator | 2025-09-27 21:27:53.733527 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-27 21:27:53.733538 | orchestrator | Saturday 27 September 2025 21:27:53 +0000 (0:00:00.161) 0:00:13.956 **** 2025-09-27 21:27:53.733550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.733561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:53.733572 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733583 | orchestrator | 2025-09-27 21:27:53.733595 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-27 21:27:53.733616 | orchestrator | Saturday 27 September 2025 21:27:53 +0000 (0:00:00.171) 0:00:14.127 **** 2025-09-27 21:27:53.733628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:53.733640 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  
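Editor's note on the "Create block VGs" / "Create block LVs" tasks above: one volume group is created per OSD data device and a single logical volume spans it; the DB/WAL variants are skipped in this run because no ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices are defined. Below is a minimal sketch of the equivalent vgcreate/lvcreate calls, assuming a whole-device physical volume per VG and an LV using all free space; it mirrors what LVM does here, not the exact Ansible tasks.

# Sketch only: shell equivalents of the block VG/LV creation above,
# assuming one whole-device PV per VG and a 100%FREE LV. Run as root.
import subprocess

def create_block_vg_and_lv(device: str, osd_uuid: str) -> None:
    vg = f"ceph-{osd_uuid}"
    lv = f"osd-block-{osd_uuid}"
    # vgcreate initializes the device as a PV if it is not one already.
    subprocess.run(["vgcreate", vg, f"/dev/{device}"], check=True)
    subprocess.run(["lvcreate", "-n", lv, "-l", "100%FREE", vg], check=True)

# Device/UUID pairs as reported for testbed-node-3 above:
# create_block_vg_and_lv("sdb", "c2ef8475-4f12-50de-ab79-c841a7bfbe3d")
# create_block_vg_and_lv("sdc", "e5968580-5dd1-5a87-a5e5-bc9ba69f72d9")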
2025-09-27 21:27:53.733650 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733661 | orchestrator | 2025-09-27 21:27:53.733672 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-27 21:27:53.733683 | orchestrator | Saturday 27 September 2025 21:27:53 +0000 (0:00:00.167) 0:00:14.295 **** 2025-09-27 21:27:53.733694 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733713 | orchestrator | 2025-09-27 21:27:53.733724 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-27 21:27:53.733735 | orchestrator | Saturday 27 September 2025 21:27:53 +0000 (0:00:00.158) 0:00:14.453 **** 2025-09-27 21:27:53.733746 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:53.733757 | orchestrator | 2025-09-27 21:27:53.733774 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-27 21:27:59.471242 | orchestrator | Saturday 27 September 2025 21:27:53 +0000 (0:00:00.136) 0:00:14.590 **** 2025-09-27 21:27:59.471331 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.471347 | orchestrator | 2025-09-27 21:27:59.471359 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-27 21:27:59.471371 | orchestrator | Saturday 27 September 2025 21:27:53 +0000 (0:00:00.138) 0:00:14.728 **** 2025-09-27 21:27:59.471382 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:27:59.471394 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-27 21:27:59.471405 | orchestrator | } 2025-09-27 21:27:59.471417 | orchestrator | 2025-09-27 21:27:59.471428 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-27 21:27:59.471439 | orchestrator | Saturday 27 September 2025 21:27:54 +0000 (0:00:00.373) 0:00:15.102 **** 2025-09-27 21:27:59.471450 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:27:59.471461 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-27 21:27:59.471472 | orchestrator | } 2025-09-27 21:27:59.471483 | orchestrator | 2025-09-27 21:27:59.471494 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-27 21:27:59.471505 | orchestrator | Saturday 27 September 2025 21:27:54 +0000 (0:00:00.152) 0:00:15.254 **** 2025-09-27 21:27:59.471516 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:27:59.471527 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-27 21:27:59.471538 | orchestrator | } 2025-09-27 21:27:59.471550 | orchestrator | 2025-09-27 21:27:59.471561 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-27 21:27:59.471594 | orchestrator | Saturday 27 September 2025 21:27:54 +0000 (0:00:00.137) 0:00:15.392 **** 2025-09-27 21:27:59.471627 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:27:59.471646 | orchestrator | 2025-09-27 21:27:59.471664 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-27 21:27:59.471684 | orchestrator | Saturday 27 September 2025 21:27:55 +0000 (0:00:00.626) 0:00:16.019 **** 2025-09-27 21:27:59.471703 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:27:59.471721 | orchestrator | 2025-09-27 21:27:59.471732 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-27 21:27:59.471743 | orchestrator | Saturday 27 September 2025 21:27:55 +0000 
(0:00:00.492) 0:00:16.511 **** 2025-09-27 21:27:59.471754 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:27:59.471765 | orchestrator | 2025-09-27 21:27:59.471776 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-27 21:27:59.471787 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.503) 0:00:17.015 **** 2025-09-27 21:27:59.471798 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:27:59.471810 | orchestrator | 2025-09-27 21:27:59.471823 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-27 21:27:59.471835 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.132) 0:00:17.148 **** 2025-09-27 21:27:59.471847 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.471859 | orchestrator | 2025-09-27 21:27:59.471894 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-27 21:27:59.471907 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.100) 0:00:17.249 **** 2025-09-27 21:27:59.471919 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.471930 | orchestrator | 2025-09-27 21:27:59.471942 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-27 21:27:59.471954 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.104) 0:00:17.353 **** 2025-09-27 21:27:59.471988 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:27:59.472001 | orchestrator |  "vgs_report": { 2025-09-27 21:27:59.472013 | orchestrator |  "vg": [] 2025-09-27 21:27:59.472025 | orchestrator |  } 2025-09-27 21:27:59.472037 | orchestrator | } 2025-09-27 21:27:59.472049 | orchestrator | 2025-09-27 21:27:59.472062 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-27 21:27:59.472074 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.136) 0:00:17.490 **** 2025-09-27 21:27:59.472085 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472097 | orchestrator | 2025-09-27 21:27:59.472109 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-27 21:27:59.472121 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.131) 0:00:17.621 **** 2025-09-27 21:27:59.472133 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472145 | orchestrator | 2025-09-27 21:27:59.472157 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-27 21:27:59.472168 | orchestrator | Saturday 27 September 2025 21:27:56 +0000 (0:00:00.115) 0:00:17.737 **** 2025-09-27 21:27:59.472179 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472190 | orchestrator | 2025-09-27 21:27:59.472201 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-27 21:27:59.472211 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.254) 0:00:17.991 **** 2025-09-27 21:27:59.472222 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472233 | orchestrator | 2025-09-27 21:27:59.472244 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-27 21:27:59.472254 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.111) 0:00:18.102 **** 2025-09-27 21:27:59.472265 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472276 | orchestrator | 
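Editor's note on the "Gather DB/WAL VGs with total and available size in bytes" tasks above: they collect an LVM report of volume-group sizes, which is then combined and printed; in this run the resulting vgs_report is empty because only the per-OSD block VGs exist. The sketch below shows how such a report can be collected with the standard vgs JSON output; the prefix filter and the helper name are assumptions.

# Sketch only: collect VG name/size/free in bytes via the vgs JSON
# report, similar to the gather tasks above. Run as root.
import json
import subprocess

def vgs_report(prefix: str = "ceph-"):
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "--units", "b", "--nosuffix",
         "-o", "vg_name,vg_size,vg_free"],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(out)["report"][0]["vg"]
    return [vg for vg in report if vg["vg_name"].startswith(prefix)]

if __name__ == "__main__":
    print(json.dumps({"vg": vgs_report()}, indent=2))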
2025-09-27 21:27:59.472302 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-27 21:27:59.472314 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.131) 0:00:18.234 **** 2025-09-27 21:27:59.472325 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472336 | orchestrator | 2025-09-27 21:27:59.472347 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-27 21:27:59.472358 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.122) 0:00:18.356 **** 2025-09-27 21:27:59.472369 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472379 | orchestrator | 2025-09-27 21:27:59.472390 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-27 21:27:59.472401 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.122) 0:00:18.479 **** 2025-09-27 21:27:59.472412 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472422 | orchestrator | 2025-09-27 21:27:59.472433 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-27 21:27:59.472460 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.129) 0:00:18.608 **** 2025-09-27 21:27:59.472471 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472482 | orchestrator | 2025-09-27 21:27:59.472493 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-27 21:27:59.472504 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.118) 0:00:18.727 **** 2025-09-27 21:27:59.472515 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472526 | orchestrator | 2025-09-27 21:27:59.472536 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-27 21:27:59.472547 | orchestrator | Saturday 27 September 2025 21:27:57 +0000 (0:00:00.117) 0:00:18.845 **** 2025-09-27 21:27:59.472558 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472569 | orchestrator | 2025-09-27 21:27:59.472580 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-27 21:27:59.472591 | orchestrator | Saturday 27 September 2025 21:27:58 +0000 (0:00:00.128) 0:00:18.973 **** 2025-09-27 21:27:59.472602 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472612 | orchestrator | 2025-09-27 21:27:59.472631 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-27 21:27:59.472642 | orchestrator | Saturday 27 September 2025 21:27:58 +0000 (0:00:00.109) 0:00:19.083 **** 2025-09-27 21:27:59.472653 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472664 | orchestrator | 2025-09-27 21:27:59.472675 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-27 21:27:59.472686 | orchestrator | Saturday 27 September 2025 21:27:58 +0000 (0:00:00.127) 0:00:19.211 **** 2025-09-27 21:27:59.472697 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472707 | orchestrator | 2025-09-27 21:27:59.472718 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-27 21:27:59.472729 | orchestrator | Saturday 27 September 2025 21:27:58 +0000 (0:00:00.145) 0:00:19.356 **** 2025-09-27 21:27:59.472740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:59.472753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:59.472763 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472774 | orchestrator | 2025-09-27 21:27:59.472785 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-27 21:27:59.472796 | orchestrator | Saturday 27 September 2025 21:27:58 +0000 (0:00:00.135) 0:00:19.492 **** 2025-09-27 21:27:59.472807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:59.472818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:59.472829 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472840 | orchestrator | 2025-09-27 21:27:59.472851 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-27 21:27:59.472878 | orchestrator | Saturday 27 September 2025 21:27:58 +0000 (0:00:00.252) 0:00:19.745 **** 2025-09-27 21:27:59.472894 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:59.472906 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:59.472917 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472928 | orchestrator | 2025-09-27 21:27:59.472939 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-27 21:27:59.472950 | orchestrator | Saturday 27 September 2025 21:27:59 +0000 (0:00:00.158) 0:00:19.903 **** 2025-09-27 21:27:59.472961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:59.472972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:59.472983 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.472994 | orchestrator | 2025-09-27 21:27:59.473005 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-27 21:27:59.473016 | orchestrator | Saturday 27 September 2025 21:27:59 +0000 (0:00:00.134) 0:00:20.037 **** 2025-09-27 21:27:59.473027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:59.473038 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:27:59.473049 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:27:59.473066 | orchestrator | 2025-09-27 21:27:59.473078 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-09-27 21:27:59.473089 | orchestrator | Saturday 27 September 2025 21:27:59 +0000 (0:00:00.143) 0:00:20.180 **** 2025-09-27 21:27:59.473100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:27:59.473117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:28:04.811748 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:28:04.811844 | orchestrator | 2025-09-27 21:28:04.811892 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-27 21:28:04.811907 | orchestrator | Saturday 27 September 2025 21:27:59 +0000 (0:00:00.146) 0:00:20.327 **** 2025-09-27 21:28:04.811919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:28:04.811933 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:28:04.811944 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:28:04.811955 | orchestrator | 2025-09-27 21:28:04.811966 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-27 21:28:04.811978 | orchestrator | Saturday 27 September 2025 21:27:59 +0000 (0:00:00.153) 0:00:20.481 **** 2025-09-27 21:28:04.811989 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:28:04.812000 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:28:04.812011 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:28:04.812023 | orchestrator | 2025-09-27 21:28:04.812034 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-27 21:28:04.812046 | orchestrator | Saturday 27 September 2025 21:27:59 +0000 (0:00:00.139) 0:00:20.620 **** 2025-09-27 21:28:04.812057 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:28:04.812069 | orchestrator | 2025-09-27 21:28:04.812080 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-27 21:28:04.812119 | orchestrator | Saturday 27 September 2025 21:28:00 +0000 (0:00:00.492) 0:00:21.112 **** 2025-09-27 21:28:04.812130 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:28:04.812141 | orchestrator | 2025-09-27 21:28:04.812152 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-27 21:28:04.812163 | orchestrator | Saturday 27 September 2025 21:28:00 +0000 (0:00:00.549) 0:00:21.662 **** 2025-09-27 21:28:04.812174 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:28:04.812185 | orchestrator | 2025-09-27 21:28:04.812196 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-27 21:28:04.812207 | orchestrator | Saturday 27 September 2025 21:28:00 +0000 (0:00:00.153) 0:00:21.816 **** 2025-09-27 21:28:04.812219 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'vg_name': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}) 2025-09-27 21:28:04.812231 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'vg_name': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}) 2025-09-27 21:28:04.812242 | orchestrator | 2025-09-27 21:28:04.812253 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-27 21:28:04.812265 | orchestrator | Saturday 27 September 2025 21:28:01 +0000 (0:00:00.175) 0:00:21.992 **** 2025-09-27 21:28:04.812276 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:28:04.812308 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:28:04.812322 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:28:04.812334 | orchestrator | 2025-09-27 21:28:04.812347 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-27 21:28:04.812359 | orchestrator | Saturday 27 September 2025 21:28:01 +0000 (0:00:00.153) 0:00:22.145 **** 2025-09-27 21:28:04.812371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:28:04.812383 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:28:04.812396 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:28:04.812408 | orchestrator | 2025-09-27 21:28:04.812421 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-27 21:28:04.812432 | orchestrator | Saturday 27 September 2025 21:28:01 +0000 (0:00:00.344) 0:00:22.489 **** 2025-09-27 21:28:04.812443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'})  2025-09-27 21:28:04.812454 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'})  2025-09-27 21:28:04.812466 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:28:04.812477 | orchestrator | 2025-09-27 21:28:04.812488 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-27 21:28:04.812498 | orchestrator | Saturday 27 September 2025 21:28:01 +0000 (0:00:00.163) 0:00:22.653 **** 2025-09-27 21:28:04.812509 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:28:04.812520 | orchestrator |  "lvm_report": { 2025-09-27 21:28:04.812531 | orchestrator |  "lv": [ 2025-09-27 21:28:04.812542 | orchestrator |  { 2025-09-27 21:28:04.812569 | orchestrator |  "lv_name": "osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d", 2025-09-27 21:28:04.812582 | orchestrator |  "vg_name": "ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d" 2025-09-27 21:28:04.812593 | orchestrator |  }, 2025-09-27 21:28:04.812604 | orchestrator |  { 2025-09-27 21:28:04.812615 | orchestrator |  "lv_name": "osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9", 2025-09-27 21:28:04.812626 | orchestrator |  "vg_name": 
"ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9" 2025-09-27 21:28:04.812637 | orchestrator |  } 2025-09-27 21:28:04.812648 | orchestrator |  ], 2025-09-27 21:28:04.812659 | orchestrator |  "pv": [ 2025-09-27 21:28:04.812670 | orchestrator |  { 2025-09-27 21:28:04.812681 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-27 21:28:04.812692 | orchestrator |  "vg_name": "ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d" 2025-09-27 21:28:04.812703 | orchestrator |  }, 2025-09-27 21:28:04.812714 | orchestrator |  { 2025-09-27 21:28:04.812725 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-27 21:28:04.812736 | orchestrator |  "vg_name": "ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9" 2025-09-27 21:28:04.812747 | orchestrator |  } 2025-09-27 21:28:04.812757 | orchestrator |  ] 2025-09-27 21:28:04.812769 | orchestrator |  } 2025-09-27 21:28:04.812780 | orchestrator | } 2025-09-27 21:28:04.812791 | orchestrator | 2025-09-27 21:28:04.812802 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-27 21:28:04.812813 | orchestrator | 2025-09-27 21:28:04.812824 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:28:04.812836 | orchestrator | Saturday 27 September 2025 21:28:02 +0000 (0:00:00.272) 0:00:22.926 **** 2025-09-27 21:28:04.812847 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-27 21:28:04.812882 | orchestrator | 2025-09-27 21:28:04.812894 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:28:04.812905 | orchestrator | Saturday 27 September 2025 21:28:02 +0000 (0:00:00.281) 0:00:23.207 **** 2025-09-27 21:28:04.812916 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:04.812927 | orchestrator | 2025-09-27 21:28:04.812938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.812949 | orchestrator | Saturday 27 September 2025 21:28:02 +0000 (0:00:00.230) 0:00:23.437 **** 2025-09-27 21:28:04.812980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-27 21:28:04.812991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-27 21:28:04.813003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-27 21:28:04.813014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-27 21:28:04.813025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-27 21:28:04.813036 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-27 21:28:04.813047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-27 21:28:04.813062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-27 21:28:04.813073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-27 21:28:04.813085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-27 21:28:04.813096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-27 21:28:04.813106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-09-27 21:28:04.813117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-27 21:28:04.813128 | orchestrator | 2025-09-27 21:28:04.813139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813150 | orchestrator | Saturday 27 September 2025 21:28:02 +0000 (0:00:00.400) 0:00:23.838 **** 2025-09-27 21:28:04.813162 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813173 | orchestrator | 2025-09-27 21:28:04.813184 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813195 | orchestrator | Saturday 27 September 2025 21:28:03 +0000 (0:00:00.198) 0:00:24.036 **** 2025-09-27 21:28:04.813206 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813216 | orchestrator | 2025-09-27 21:28:04.813227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813239 | orchestrator | Saturday 27 September 2025 21:28:03 +0000 (0:00:00.205) 0:00:24.242 **** 2025-09-27 21:28:04.813249 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813260 | orchestrator | 2025-09-27 21:28:04.813271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813282 | orchestrator | Saturday 27 September 2025 21:28:03 +0000 (0:00:00.201) 0:00:24.444 **** 2025-09-27 21:28:04.813293 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813304 | orchestrator | 2025-09-27 21:28:04.813315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813326 | orchestrator | Saturday 27 September 2025 21:28:04 +0000 (0:00:00.605) 0:00:25.049 **** 2025-09-27 21:28:04.813337 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813348 | orchestrator | 2025-09-27 21:28:04.813359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813370 | orchestrator | Saturday 27 September 2025 21:28:04 +0000 (0:00:00.213) 0:00:25.263 **** 2025-09-27 21:28:04.813381 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813392 | orchestrator | 2025-09-27 21:28:04.813410 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:04.813421 | orchestrator | Saturday 27 September 2025 21:28:04 +0000 (0:00:00.200) 0:00:25.464 **** 2025-09-27 21:28:04.813432 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:04.813443 | orchestrator | 2025-09-27 21:28:04.813461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:14.978877 | orchestrator | Saturday 27 September 2025 21:28:04 +0000 (0:00:00.204) 0:00:25.668 **** 2025-09-27 21:28:14.978999 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979017 | orchestrator | 2025-09-27 21:28:14.979039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:14.979052 | orchestrator | Saturday 27 September 2025 21:28:04 +0000 (0:00:00.194) 0:00:25.862 **** 2025-09-27 21:28:14.979064 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b) 2025-09-27 21:28:14.979077 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b) 2025-09-27 
21:28:14.979088 | orchestrator | 2025-09-27 21:28:14.979100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:14.979111 | orchestrator | Saturday 27 September 2025 21:28:05 +0000 (0:00:00.415) 0:00:26.278 **** 2025-09-27 21:28:14.979122 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0) 2025-09-27 21:28:14.979133 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0) 2025-09-27 21:28:14.979144 | orchestrator | 2025-09-27 21:28:14.979155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:14.979166 | orchestrator | Saturday 27 September 2025 21:28:05 +0000 (0:00:00.410) 0:00:26.688 **** 2025-09-27 21:28:14.979176 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac) 2025-09-27 21:28:14.979187 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac) 2025-09-27 21:28:14.979198 | orchestrator | 2025-09-27 21:28:14.979209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:14.979220 | orchestrator | Saturday 27 September 2025 21:28:06 +0000 (0:00:00.425) 0:00:27.113 **** 2025-09-27 21:28:14.979231 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9) 2025-09-27 21:28:14.979242 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9) 2025-09-27 21:28:14.979280 | orchestrator | 2025-09-27 21:28:14.979292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:14.979303 | orchestrator | Saturday 27 September 2025 21:28:06 +0000 (0:00:00.539) 0:00:27.653 **** 2025-09-27 21:28:14.979314 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:28:14.979325 | orchestrator | 2025-09-27 21:28:14.979336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979347 | orchestrator | Saturday 27 September 2025 21:28:07 +0000 (0:00:00.411) 0:00:28.064 **** 2025-09-27 21:28:14.979358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-27 21:28:14.979385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-27 21:28:14.979398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-27 21:28:14.979411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-27 21:28:14.979423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-27 21:28:14.979435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-27 21:28:14.979447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-27 21:28:14.979482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-27 21:28:14.979494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-27 21:28:14.979507 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-27 21:28:14.979519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-27 21:28:14.979531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-27 21:28:14.979543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-27 21:28:14.979554 | orchestrator | 2025-09-27 21:28:14.979565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979575 | orchestrator | Saturday 27 September 2025 21:28:07 +0000 (0:00:00.652) 0:00:28.716 **** 2025-09-27 21:28:14.979586 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979597 | orchestrator | 2025-09-27 21:28:14.979608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979619 | orchestrator | Saturday 27 September 2025 21:28:08 +0000 (0:00:00.234) 0:00:28.951 **** 2025-09-27 21:28:14.979630 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979641 | orchestrator | 2025-09-27 21:28:14.979652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979663 | orchestrator | Saturday 27 September 2025 21:28:08 +0000 (0:00:00.225) 0:00:29.177 **** 2025-09-27 21:28:14.979674 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979685 | orchestrator | 2025-09-27 21:28:14.979695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979706 | orchestrator | Saturday 27 September 2025 21:28:08 +0000 (0:00:00.217) 0:00:29.395 **** 2025-09-27 21:28:14.979717 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979728 | orchestrator | 2025-09-27 21:28:14.979756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979768 | orchestrator | Saturday 27 September 2025 21:28:08 +0000 (0:00:00.193) 0:00:29.588 **** 2025-09-27 21:28:14.979779 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979790 | orchestrator | 2025-09-27 21:28:14.979801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979812 | orchestrator | Saturday 27 September 2025 21:28:08 +0000 (0:00:00.184) 0:00:29.772 **** 2025-09-27 21:28:14.979823 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979833 | orchestrator | 2025-09-27 21:28:14.979861 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979873 | orchestrator | Saturday 27 September 2025 21:28:09 +0000 (0:00:00.208) 0:00:29.981 **** 2025-09-27 21:28:14.979883 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979894 | orchestrator | 2025-09-27 21:28:14.979905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979916 | orchestrator | Saturday 27 September 2025 21:28:09 +0000 (0:00:00.213) 0:00:30.194 **** 2025-09-27 21:28:14.979927 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.979938 | orchestrator | 2025-09-27 21:28:14.979948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.979959 | orchestrator 
| Saturday 27 September 2025 21:28:09 +0000 (0:00:00.203) 0:00:30.397 **** 2025-09-27 21:28:14.979970 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-27 21:28:14.979981 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-27 21:28:14.979992 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-27 21:28:14.980003 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-27 21:28:14.980013 | orchestrator | 2025-09-27 21:28:14.980025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.980036 | orchestrator | Saturday 27 September 2025 21:28:10 +0000 (0:00:00.814) 0:00:31.212 **** 2025-09-27 21:28:14.980055 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.980066 | orchestrator | 2025-09-27 21:28:14.980077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.980087 | orchestrator | Saturday 27 September 2025 21:28:10 +0000 (0:00:00.202) 0:00:31.414 **** 2025-09-27 21:28:14.980098 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.980109 | orchestrator | 2025-09-27 21:28:14.980119 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.980130 | orchestrator | Saturday 27 September 2025 21:28:10 +0000 (0:00:00.202) 0:00:31.617 **** 2025-09-27 21:28:14.980141 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.980151 | orchestrator | 2025-09-27 21:28:14.980162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:14.980173 | orchestrator | Saturday 27 September 2025 21:28:11 +0000 (0:00:00.626) 0:00:32.243 **** 2025-09-27 21:28:14.980184 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.980195 | orchestrator | 2025-09-27 21:28:14.980206 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-27 21:28:14.980217 | orchestrator | Saturday 27 September 2025 21:28:11 +0000 (0:00:00.214) 0:00:32.458 **** 2025-09-27 21:28:14.980227 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.980238 | orchestrator | 2025-09-27 21:28:14.980249 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-27 21:28:14.980260 | orchestrator | Saturday 27 September 2025 21:28:11 +0000 (0:00:00.143) 0:00:32.602 **** 2025-09-27 21:28:14.980271 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'de74169a-f069-5642-ad17-f2f17c514bb2'}}) 2025-09-27 21:28:14.980282 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '364a105c-f104-5917-80d0-e8f8560ea5f8'}}) 2025-09-27 21:28:14.980293 | orchestrator | 2025-09-27 21:28:14.980303 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-27 21:28:14.980314 | orchestrator | Saturday 27 September 2025 21:28:11 +0000 (0:00:00.190) 0:00:32.792 **** 2025-09-27 21:28:14.980326 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'}) 2025-09-27 21:28:14.980338 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'}) 2025-09-27 21:28:14.980349 | orchestrator | 2025-09-27 21:28:14.980359 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-09-27 21:28:14.980370 | orchestrator | Saturday 27 September 2025 21:28:13 +0000 (0:00:01.684) 0:00:34.476 **** 2025-09-27 21:28:14.980381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:14.980394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:14.980404 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:14.980415 | orchestrator | 2025-09-27 21:28:14.980426 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-27 21:28:14.980437 | orchestrator | Saturday 27 September 2025 21:28:13 +0000 (0:00:00.143) 0:00:34.619 **** 2025-09-27 21:28:14.980448 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'}) 2025-09-27 21:28:14.980459 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'}) 2025-09-27 21:28:14.980469 | orchestrator | 2025-09-27 21:28:14.980486 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-27 21:28:20.631980 | orchestrator | Saturday 27 September 2025 21:28:14 +0000 (0:00:01.210) 0:00:35.830 **** 2025-09-27 21:28:20.632118 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632148 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632161 | orchestrator | 2025-09-27 21:28:20.632173 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-27 21:28:20.632185 | orchestrator | Saturday 27 September 2025 21:28:15 +0000 (0:00:00.165) 0:00:35.996 **** 2025-09-27 21:28:20.632196 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632207 | orchestrator | 2025-09-27 21:28:20.632218 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-27 21:28:20.632229 | orchestrator | Saturday 27 September 2025 21:28:15 +0000 (0:00:00.141) 0:00:36.137 **** 2025-09-27 21:28:20.632240 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632279 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632290 | orchestrator | 2025-09-27 21:28:20.632301 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-27 21:28:20.632312 | orchestrator | Saturday 27 September 2025 21:28:15 +0000 (0:00:00.162) 0:00:36.299 **** 2025-09-27 21:28:20.632323 | orchestrator | skipping: [testbed-node-4] 
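[Editor's note] The two "changed" tasks above ("Create block VGs" and "Create block LVs") build one dedicated volume group per OSD data device, named ceph-<osd_lvm_uuid>, plus a single osd-block-<osd_lvm_uuid> logical volume that fills it; the lvm_report printed at the end of each play confirms that layout (one LV per VG, backed by /dev/sdb and /dev/sdc). Below is a minimal sketch of equivalent Ansible tasks. It is not the actual OSISM task file: the module choice (community.general.lvg / community.general.lvol), the _osd_block_volumes variable, and the device names and UUIDs copied from this log are assumptions made purely for illustration.

    ---
    - name: Sketch of the block VG/LV creation seen above (illustrative only)
      hosts: testbed-node-4
      become: true
      vars:
        _osd_block_volumes:          # values copied from this log, for illustration only
          - data: osd-block-de74169a-f069-5642-ad17-f2f17c514bb2
            data_vg: ceph-de74169a-f069-5642-ad17-f2f17c514bb2
            data_pv: /dev/sdb
          - data: osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8
            data_vg: ceph-364a105c-f104-5917-80d0-e8f8560ea5f8
            data_pv: /dev/sdc
      tasks:
        - name: Create one VG per OSD data device
          community.general.lvg:
            vg: "{{ item.data_vg }}"
            pvs: "{{ item.data_pv }}"
            state: present
          loop: "{{ _osd_block_volumes }}"

        - name: Create one LV per VG, consuming all free space
          community.general.lvol:
            vg: "{{ item.data_vg }}"
            lv: "{{ item.data }}"
            size: "100%FREE"
            state: present
          loop: "{{ _osd_block_volumes }}"

Roughly, this corresponds to running vgcreate ceph-<uuid> /dev/sdX followed by lvcreate -n osd-block-<uuid> -l 100%FREE ceph-<uuid> on each node. The DB and WAL tasks that follow are all skipped, which is consistent with this testbed scenario defining only ceph_osd_devices and no separate ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices.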
2025-09-27 21:28:20.632334 | orchestrator | 2025-09-27 21:28:20.632345 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-27 21:28:20.632356 | orchestrator | Saturday 27 September 2025 21:28:15 +0000 (0:00:00.140) 0:00:36.440 **** 2025-09-27 21:28:20.632366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632378 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632389 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632400 | orchestrator | 2025-09-27 21:28:20.632411 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-27 21:28:20.632422 | orchestrator | Saturday 27 September 2025 21:28:15 +0000 (0:00:00.158) 0:00:36.598 **** 2025-09-27 21:28:20.632437 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632448 | orchestrator | 2025-09-27 21:28:20.632459 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-27 21:28:20.632470 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.340) 0:00:36.938 **** 2025-09-27 21:28:20.632481 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632505 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632518 | orchestrator | 2025-09-27 21:28:20.632530 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-27 21:28:20.632542 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.144) 0:00:37.083 **** 2025-09-27 21:28:20.632554 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:20.632567 | orchestrator | 2025-09-27 21:28:20.632579 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-27 21:28:20.632591 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.142) 0:00:37.226 **** 2025-09-27 21:28:20.632611 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632624 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632637 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632648 | orchestrator | 2025-09-27 21:28:20.632661 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-27 21:28:20.632673 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.132) 0:00:37.358 **** 2025-09-27 21:28:20.632685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632709 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632721 | orchestrator | 2025-09-27 21:28:20.632733 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-27 21:28:20.632745 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.145) 0:00:37.504 **** 2025-09-27 21:28:20.632774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:20.632787 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:20.632800 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632811 | orchestrator | 2025-09-27 21:28:20.632821 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-27 21:28:20.632832 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.153) 0:00:37.657 **** 2025-09-27 21:28:20.632867 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632879 | orchestrator | 2025-09-27 21:28:20.632890 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-27 21:28:20.632900 | orchestrator | Saturday 27 September 2025 21:28:16 +0000 (0:00:00.153) 0:00:37.811 **** 2025-09-27 21:28:20.632911 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632922 | orchestrator | 2025-09-27 21:28:20.632933 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-27 21:28:20.632944 | orchestrator | Saturday 27 September 2025 21:28:17 +0000 (0:00:00.140) 0:00:37.952 **** 2025-09-27 21:28:20.632955 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.632966 | orchestrator | 2025-09-27 21:28:20.632976 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-27 21:28:20.632988 | orchestrator | Saturday 27 September 2025 21:28:17 +0000 (0:00:00.141) 0:00:38.093 **** 2025-09-27 21:28:20.632999 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:28:20.633010 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-27 21:28:20.633021 | orchestrator | } 2025-09-27 21:28:20.633032 | orchestrator | 2025-09-27 21:28:20.633043 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-27 21:28:20.633054 | orchestrator | Saturday 27 September 2025 21:28:17 +0000 (0:00:00.148) 0:00:38.242 **** 2025-09-27 21:28:20.633065 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:28:20.633075 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-27 21:28:20.633086 | orchestrator | } 2025-09-27 21:28:20.633097 | orchestrator | 2025-09-27 21:28:20.633108 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-27 21:28:20.633119 | orchestrator | Saturday 27 September 2025 21:28:17 +0000 (0:00:00.131) 0:00:38.373 **** 2025-09-27 21:28:20.633129 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:28:20.633140 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-27 21:28:20.633158 | orchestrator | } 2025-09-27 21:28:20.633169 | orchestrator | 2025-09-27 21:28:20.633180 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-09-27 21:28:20.633191 | orchestrator | Saturday 27 September 2025 21:28:17 +0000 (0:00:00.158) 0:00:38.531 **** 2025-09-27 21:28:20.633202 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:20.633212 | orchestrator | 2025-09-27 21:28:20.633223 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-27 21:28:20.633234 | orchestrator | Saturday 27 September 2025 21:28:18 +0000 (0:00:00.744) 0:00:39.276 **** 2025-09-27 21:28:20.633250 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:20.633262 | orchestrator | 2025-09-27 21:28:20.633272 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-27 21:28:20.633283 | orchestrator | Saturday 27 September 2025 21:28:18 +0000 (0:00:00.551) 0:00:39.827 **** 2025-09-27 21:28:20.633294 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:20.633305 | orchestrator | 2025-09-27 21:28:20.633316 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-27 21:28:20.633327 | orchestrator | Saturday 27 September 2025 21:28:19 +0000 (0:00:00.544) 0:00:40.371 **** 2025-09-27 21:28:20.633338 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:20.633349 | orchestrator | 2025-09-27 21:28:20.633359 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-27 21:28:20.633370 | orchestrator | Saturday 27 September 2025 21:28:19 +0000 (0:00:00.159) 0:00:40.530 **** 2025-09-27 21:28:20.633381 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.633392 | orchestrator | 2025-09-27 21:28:20.633403 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-27 21:28:20.633414 | orchestrator | Saturday 27 September 2025 21:28:19 +0000 (0:00:00.115) 0:00:40.646 **** 2025-09-27 21:28:20.633425 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.633435 | orchestrator | 2025-09-27 21:28:20.633446 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-27 21:28:20.633457 | orchestrator | Saturday 27 September 2025 21:28:19 +0000 (0:00:00.112) 0:00:40.759 **** 2025-09-27 21:28:20.633468 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:28:20.633478 | orchestrator |  "vgs_report": { 2025-09-27 21:28:20.633490 | orchestrator |  "vg": [] 2025-09-27 21:28:20.633501 | orchestrator |  } 2025-09-27 21:28:20.633512 | orchestrator | } 2025-09-27 21:28:20.633523 | orchestrator | 2025-09-27 21:28:20.633534 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-27 21:28:20.633545 | orchestrator | Saturday 27 September 2025 21:28:20 +0000 (0:00:00.153) 0:00:40.912 **** 2025-09-27 21:28:20.633556 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.633566 | orchestrator | 2025-09-27 21:28:20.633577 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-27 21:28:20.633588 | orchestrator | Saturday 27 September 2025 21:28:20 +0000 (0:00:00.141) 0:00:41.053 **** 2025-09-27 21:28:20.633599 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.633609 | orchestrator | 2025-09-27 21:28:20.633620 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-27 21:28:20.633631 | orchestrator | Saturday 27 September 2025 21:28:20 +0000 
(0:00:00.144) 0:00:41.198 **** 2025-09-27 21:28:20.633642 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.633653 | orchestrator | 2025-09-27 21:28:20.633664 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-27 21:28:20.633675 | orchestrator | Saturday 27 September 2025 21:28:20 +0000 (0:00:00.139) 0:00:41.338 **** 2025-09-27 21:28:20.633686 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:20.633697 | orchestrator | 2025-09-27 21:28:20.633708 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-27 21:28:20.633725 | orchestrator | Saturday 27 September 2025 21:28:20 +0000 (0:00:00.149) 0:00:41.488 **** 2025-09-27 21:28:25.322529 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322613 | orchestrator | 2025-09-27 21:28:25.322650 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-27 21:28:25.322664 | orchestrator | Saturday 27 September 2025 21:28:20 +0000 (0:00:00.142) 0:00:41.630 **** 2025-09-27 21:28:25.322675 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322686 | orchestrator | 2025-09-27 21:28:25.322697 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-27 21:28:25.322708 | orchestrator | Saturday 27 September 2025 21:28:21 +0000 (0:00:00.319) 0:00:41.950 **** 2025-09-27 21:28:25.322718 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322729 | orchestrator | 2025-09-27 21:28:25.322740 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-27 21:28:25.322751 | orchestrator | Saturday 27 September 2025 21:28:21 +0000 (0:00:00.141) 0:00:42.091 **** 2025-09-27 21:28:25.322762 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322773 | orchestrator | 2025-09-27 21:28:25.322784 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-27 21:28:25.322794 | orchestrator | Saturday 27 September 2025 21:28:21 +0000 (0:00:00.139) 0:00:42.231 **** 2025-09-27 21:28:25.322805 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322816 | orchestrator | 2025-09-27 21:28:25.322827 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-27 21:28:25.322869 | orchestrator | Saturday 27 September 2025 21:28:21 +0000 (0:00:00.131) 0:00:42.363 **** 2025-09-27 21:28:25.322881 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322892 | orchestrator | 2025-09-27 21:28:25.322903 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-27 21:28:25.322914 | orchestrator | Saturday 27 September 2025 21:28:21 +0000 (0:00:00.137) 0:00:42.501 **** 2025-09-27 21:28:25.322924 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.322935 | orchestrator | 2025-09-27 21:28:25.322946 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-27 21:28:25.322984 | orchestrator | Saturday 27 September 2025 21:28:21 +0000 (0:00:00.152) 0:00:42.653 **** 2025-09-27 21:28:25.322995 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323006 | orchestrator | 2025-09-27 21:28:25.323017 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-27 21:28:25.323028 | orchestrator | Saturday 27 September 2025 
21:28:21 +0000 (0:00:00.137) 0:00:42.791 **** 2025-09-27 21:28:25.323039 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323050 | orchestrator | 2025-09-27 21:28:25.323061 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-27 21:28:25.323072 | orchestrator | Saturday 27 September 2025 21:28:22 +0000 (0:00:00.137) 0:00:42.928 **** 2025-09-27 21:28:25.323082 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323093 | orchestrator | 2025-09-27 21:28:25.323107 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-27 21:28:25.323119 | orchestrator | Saturday 27 September 2025 21:28:22 +0000 (0:00:00.139) 0:00:43.068 **** 2025-09-27 21:28:25.323141 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323156 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323169 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323180 | orchestrator | 2025-09-27 21:28:25.323193 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-27 21:28:25.323205 | orchestrator | Saturday 27 September 2025 21:28:22 +0000 (0:00:00.154) 0:00:43.223 **** 2025-09-27 21:28:25.323217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323249 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323261 | orchestrator | 2025-09-27 21:28:25.323273 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-27 21:28:25.323285 | orchestrator | Saturday 27 September 2025 21:28:22 +0000 (0:00:00.185) 0:00:43.408 **** 2025-09-27 21:28:25.323298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323321 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323333 | orchestrator | 2025-09-27 21:28:25.323345 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-27 21:28:25.323357 | orchestrator | Saturday 27 September 2025 21:28:22 +0000 (0:00:00.151) 0:00:43.560 **** 2025-09-27 21:28:25.323369 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323394 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323405 | orchestrator | 2025-09-27 21:28:25.323417 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-27 21:28:25.323444 | orchestrator | Saturday 27 September 2025 21:28:23 +0000 (0:00:00.371) 0:00:43.931 **** 2025-09-27 21:28:25.323457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323470 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323481 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323492 | orchestrator | 2025-09-27 21:28:25.323503 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-27 21:28:25.323513 | orchestrator | Saturday 27 September 2025 21:28:23 +0000 (0:00:00.153) 0:00:44.085 **** 2025-09-27 21:28:25.323524 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323546 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323557 | orchestrator | 2025-09-27 21:28:25.323568 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-27 21:28:25.323579 | orchestrator | Saturday 27 September 2025 21:28:23 +0000 (0:00:00.159) 0:00:44.245 **** 2025-09-27 21:28:25.323590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323612 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323622 | orchestrator | 2025-09-27 21:28:25.323633 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-27 21:28:25.323644 | orchestrator | Saturday 27 September 2025 21:28:23 +0000 (0:00:00.165) 0:00:44.411 **** 2025-09-27 21:28:25.323655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.323672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.323683 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.323694 | orchestrator | 2025-09-27 21:28:25.323705 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-27 21:28:25.323753 | orchestrator | Saturday 27 September 2025 21:28:23 +0000 (0:00:00.149) 0:00:44.561 **** 2025-09-27 21:28:25.323765 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:25.323777 | orchestrator | 2025-09-27 21:28:25.323787 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-27 21:28:25.323798 | orchestrator | Saturday 27 September 2025 21:28:24 +0000 (0:00:00.557) 
0:00:45.118 **** 2025-09-27 21:28:25.323809 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:25.323820 | orchestrator | 2025-09-27 21:28:25.323831 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-27 21:28:25.323869 | orchestrator | Saturday 27 September 2025 21:28:24 +0000 (0:00:00.468) 0:00:45.587 **** 2025-09-27 21:28:25.323888 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:28:25.323905 | orchestrator | 2025-09-27 21:28:25.323917 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-27 21:28:25.323928 | orchestrator | Saturday 27 September 2025 21:28:24 +0000 (0:00:00.136) 0:00:45.723 **** 2025-09-27 21:28:25.323938 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'vg_name': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'}) 2025-09-27 21:28:25.323950 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'vg_name': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'}) 2025-09-27 21:28:25.323961 | orchestrator | 2025-09-27 21:28:25.323971 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-27 21:28:25.323982 | orchestrator | Saturday 27 September 2025 21:28:25 +0000 (0:00:00.161) 0:00:45.884 **** 2025-09-27 21:28:25.323993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.324004 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.324015 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:25.324026 | orchestrator | 2025-09-27 21:28:25.324036 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-27 21:28:25.324047 | orchestrator | Saturday 27 September 2025 21:28:25 +0000 (0:00:00.147) 0:00:46.031 **** 2025-09-27 21:28:25.324058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:25.324069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:25.324088 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:30.697923 | orchestrator | 2025-09-27 21:28:30.698071 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-27 21:28:30.698709 | orchestrator | Saturday 27 September 2025 21:28:25 +0000 (0:00:00.148) 0:00:46.179 **** 2025-09-27 21:28:30.698731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'})  2025-09-27 21:28:30.698744 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'})  2025-09-27 21:28:30.698757 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:28:30.698771 | orchestrator | 2025-09-27 21:28:30.698784 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-27 21:28:30.698804 
| orchestrator | Saturday 27 September 2025 21:28:25 +0000 (0:00:00.139) 0:00:46.319 **** 2025-09-27 21:28:30.698865 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:28:30.698878 | orchestrator |  "lvm_report": { 2025-09-27 21:28:30.698892 | orchestrator |  "lv": [ 2025-09-27 21:28:30.698910 | orchestrator |  { 2025-09-27 21:28:30.698921 | orchestrator |  "lv_name": "osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8", 2025-09-27 21:28:30.698932 | orchestrator |  "vg_name": "ceph-364a105c-f104-5917-80d0-e8f8560ea5f8" 2025-09-27 21:28:30.698944 | orchestrator |  }, 2025-09-27 21:28:30.698954 | orchestrator |  { 2025-09-27 21:28:30.698965 | orchestrator |  "lv_name": "osd-block-de74169a-f069-5642-ad17-f2f17c514bb2", 2025-09-27 21:28:30.698975 | orchestrator |  "vg_name": "ceph-de74169a-f069-5642-ad17-f2f17c514bb2" 2025-09-27 21:28:30.698986 | orchestrator |  } 2025-09-27 21:28:30.698996 | orchestrator |  ], 2025-09-27 21:28:30.699007 | orchestrator |  "pv": [ 2025-09-27 21:28:30.699018 | orchestrator |  { 2025-09-27 21:28:30.699028 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-27 21:28:30.699039 | orchestrator |  "vg_name": "ceph-de74169a-f069-5642-ad17-f2f17c514bb2" 2025-09-27 21:28:30.699050 | orchestrator |  }, 2025-09-27 21:28:30.699060 | orchestrator |  { 2025-09-27 21:28:30.699071 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-27 21:28:30.699082 | orchestrator |  "vg_name": "ceph-364a105c-f104-5917-80d0-e8f8560ea5f8" 2025-09-27 21:28:30.699093 | orchestrator |  } 2025-09-27 21:28:30.699103 | orchestrator |  ] 2025-09-27 21:28:30.699114 | orchestrator |  } 2025-09-27 21:28:30.699124 | orchestrator | } 2025-09-27 21:28:30.699135 | orchestrator | 2025-09-27 21:28:30.699146 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-27 21:28:30.699157 | orchestrator | 2025-09-27 21:28:30.699167 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-27 21:28:30.699178 | orchestrator | Saturday 27 September 2025 21:28:25 +0000 (0:00:00.376) 0:00:46.696 **** 2025-09-27 21:28:30.699189 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-27 21:28:30.699200 | orchestrator | 2025-09-27 21:28:30.699228 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-27 21:28:30.699241 | orchestrator | Saturday 27 September 2025 21:28:26 +0000 (0:00:00.222) 0:00:46.918 **** 2025-09-27 21:28:30.699252 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:30.699263 | orchestrator | 2025-09-27 21:28:30.699276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699294 | orchestrator | Saturday 27 September 2025 21:28:26 +0000 (0:00:00.206) 0:00:47.125 **** 2025-09-27 21:28:30.699305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-27 21:28:30.699316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-27 21:28:30.699327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-27 21:28:30.699344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-27 21:28:30.699357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-27 21:28:30.699368 | orchestrator | included: /ansible/tasks/_add-device-links.yml 
for testbed-node-5 => (item=loop5) 2025-09-27 21:28:30.699378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-27 21:28:30.699389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-27 21:28:30.699400 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-27 21:28:30.699410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-27 21:28:30.699421 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-27 21:28:30.699440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-27 21:28:30.699450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-27 21:28:30.699461 | orchestrator | 2025-09-27 21:28:30.699472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699483 | orchestrator | Saturday 27 September 2025 21:28:26 +0000 (0:00:00.370) 0:00:47.496 **** 2025-09-27 21:28:30.699494 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699508 | orchestrator | 2025-09-27 21:28:30.699519 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699530 | orchestrator | Saturday 27 September 2025 21:28:26 +0000 (0:00:00.198) 0:00:47.694 **** 2025-09-27 21:28:30.699541 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699552 | orchestrator | 2025-09-27 21:28:30.699563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699593 | orchestrator | Saturday 27 September 2025 21:28:27 +0000 (0:00:00.182) 0:00:47.877 **** 2025-09-27 21:28:30.699605 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699615 | orchestrator | 2025-09-27 21:28:30.699626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699638 | orchestrator | Saturday 27 September 2025 21:28:27 +0000 (0:00:00.191) 0:00:48.068 **** 2025-09-27 21:28:30.699649 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699659 | orchestrator | 2025-09-27 21:28:30.699670 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699681 | orchestrator | Saturday 27 September 2025 21:28:27 +0000 (0:00:00.191) 0:00:48.260 **** 2025-09-27 21:28:30.699692 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699703 | orchestrator | 2025-09-27 21:28:30.699714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699725 | orchestrator | Saturday 27 September 2025 21:28:27 +0000 (0:00:00.183) 0:00:48.444 **** 2025-09-27 21:28:30.699736 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699747 | orchestrator | 2025-09-27 21:28:30.699760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699777 | orchestrator | Saturday 27 September 2025 21:28:28 +0000 (0:00:00.426) 0:00:48.870 **** 2025-09-27 21:28:30.699789 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699799 | orchestrator | 2025-09-27 21:28:30.699810 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2025-09-27 21:28:30.699821 | orchestrator | Saturday 27 September 2025 21:28:28 +0000 (0:00:00.180) 0:00:49.051 **** 2025-09-27 21:28:30.699859 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:30.699871 | orchestrator | 2025-09-27 21:28:30.699882 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699893 | orchestrator | Saturday 27 September 2025 21:28:28 +0000 (0:00:00.159) 0:00:49.211 **** 2025-09-27 21:28:30.699904 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f) 2025-09-27 21:28:30.699916 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f) 2025-09-27 21:28:30.699927 | orchestrator | 2025-09-27 21:28:30.699938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.699949 | orchestrator | Saturday 27 September 2025 21:28:28 +0000 (0:00:00.407) 0:00:49.618 **** 2025-09-27 21:28:30.699960 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e) 2025-09-27 21:28:30.699971 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e) 2025-09-27 21:28:30.699982 | orchestrator | 2025-09-27 21:28:30.699992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.700003 | orchestrator | Saturday 27 September 2025 21:28:29 +0000 (0:00:00.397) 0:00:50.017 **** 2025-09-27 21:28:30.700026 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad) 2025-09-27 21:28:30.700038 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad) 2025-09-27 21:28:30.700049 | orchestrator | 2025-09-27 21:28:30.700060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.700071 | orchestrator | Saturday 27 September 2025 21:28:29 +0000 (0:00:00.423) 0:00:50.440 **** 2025-09-27 21:28:30.700082 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa) 2025-09-27 21:28:30.700093 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa) 2025-09-27 21:28:30.700104 | orchestrator | 2025-09-27 21:28:30.700115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-27 21:28:30.700125 | orchestrator | Saturday 27 September 2025 21:28:29 +0000 (0:00:00.399) 0:00:50.840 **** 2025-09-27 21:28:30.700136 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-27 21:28:30.700147 | orchestrator | 2025-09-27 21:28:30.700158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:30.700169 | orchestrator | Saturday 27 September 2025 21:28:30 +0000 (0:00:00.313) 0:00:51.153 **** 2025-09-27 21:28:30.700180 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-27 21:28:30.700191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-27 21:28:30.700201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-27 21:28:30.700212 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-27 21:28:30.700223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-27 21:28:30.700234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-27 21:28:30.700244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-27 21:28:30.700255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-27 21:28:30.700266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-27 21:28:30.700277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-27 21:28:30.700288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-27 21:28:30.700306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-27 21:28:39.612554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-27 21:28:39.612631 | orchestrator | 2025-09-27 21:28:39.612637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612642 | orchestrator | Saturday 27 September 2025 21:28:30 +0000 (0:00:00.394) 0:00:51.547 **** 2025-09-27 21:28:39.612647 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612652 | orchestrator | 2025-09-27 21:28:39.612657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612661 | orchestrator | Saturday 27 September 2025 21:28:30 +0000 (0:00:00.186) 0:00:51.734 **** 2025-09-27 21:28:39.612665 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612669 | orchestrator | 2025-09-27 21:28:39.612673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612677 | orchestrator | Saturday 27 September 2025 21:28:31 +0000 (0:00:00.201) 0:00:51.935 **** 2025-09-27 21:28:39.612680 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612684 | orchestrator | 2025-09-27 21:28:39.612688 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612705 | orchestrator | Saturday 27 September 2025 21:28:31 +0000 (0:00:00.569) 0:00:52.505 **** 2025-09-27 21:28:39.612709 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612713 | orchestrator | 2025-09-27 21:28:39.612717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612721 | orchestrator | Saturday 27 September 2025 21:28:31 +0000 (0:00:00.195) 0:00:52.701 **** 2025-09-27 21:28:39.612724 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612728 | orchestrator | 2025-09-27 21:28:39.612732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612736 | orchestrator | Saturday 27 September 2025 21:28:32 +0000 (0:00:00.226) 0:00:52.927 **** 2025-09-27 21:28:39.612739 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612743 | orchestrator | 2025-09-27 21:28:39.612747 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-27 21:28:39.612751 | orchestrator | Saturday 27 September 2025 21:28:32 +0000 (0:00:00.198) 0:00:53.125 **** 2025-09-27 21:28:39.612754 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612758 | orchestrator | 2025-09-27 21:28:39.612762 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612766 | orchestrator | Saturday 27 September 2025 21:28:32 +0000 (0:00:00.194) 0:00:53.319 **** 2025-09-27 21:28:39.612770 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612773 | orchestrator | 2025-09-27 21:28:39.612777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612781 | orchestrator | Saturday 27 September 2025 21:28:32 +0000 (0:00:00.200) 0:00:53.520 **** 2025-09-27 21:28:39.612785 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-27 21:28:39.612790 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-27 21:28:39.612794 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-27 21:28:39.612798 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-27 21:28:39.612802 | orchestrator | 2025-09-27 21:28:39.612806 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612809 | orchestrator | Saturday 27 September 2025 21:28:33 +0000 (0:00:00.634) 0:00:54.154 **** 2025-09-27 21:28:39.612813 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612830 | orchestrator | 2025-09-27 21:28:39.612834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612838 | orchestrator | Saturday 27 September 2025 21:28:33 +0000 (0:00:00.225) 0:00:54.380 **** 2025-09-27 21:28:39.612842 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612846 | orchestrator | 2025-09-27 21:28:39.612850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612854 | orchestrator | Saturday 27 September 2025 21:28:33 +0000 (0:00:00.198) 0:00:54.578 **** 2025-09-27 21:28:39.612858 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612862 | orchestrator | 2025-09-27 21:28:39.612865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-27 21:28:39.612869 | orchestrator | Saturday 27 September 2025 21:28:33 +0000 (0:00:00.196) 0:00:54.775 **** 2025-09-27 21:28:39.612873 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612877 | orchestrator | 2025-09-27 21:28:39.612880 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-27 21:28:39.612884 | orchestrator | Saturday 27 September 2025 21:28:34 +0000 (0:00:00.213) 0:00:54.989 **** 2025-09-27 21:28:39.612888 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612892 | orchestrator | 2025-09-27 21:28:39.612895 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-27 21:28:39.612899 | orchestrator | Saturday 27 September 2025 21:28:34 +0000 (0:00:00.324) 0:00:55.314 **** 2025-09-27 21:28:39.612903 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5f61d8e2-65b7-57ca-8dcb-2a964e525246'}}) 2025-09-27 21:28:39.612907 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}}) 2025-09-27 21:28:39.612914 | orchestrator | 2025-09-27 21:28:39.612918 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-27 21:28:39.612922 | orchestrator | Saturday 27 September 2025 21:28:34 +0000 (0:00:00.198) 0:00:55.513 **** 2025-09-27 21:28:39.612926 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'}) 2025-09-27 21:28:39.612931 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}) 2025-09-27 21:28:39.612935 | orchestrator | 2025-09-27 21:28:39.612939 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-27 21:28:39.612952 | orchestrator | Saturday 27 September 2025 21:28:36 +0000 (0:00:01.868) 0:00:57.381 **** 2025-09-27 21:28:39.612956 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:39.612961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:39.612965 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.612969 | orchestrator | 2025-09-27 21:28:39.612972 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-27 21:28:39.612976 | orchestrator | Saturday 27 September 2025 21:28:36 +0000 (0:00:00.162) 0:00:57.544 **** 2025-09-27 21:28:39.612980 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'}) 2025-09-27 21:28:39.612993 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}) 2025-09-27 21:28:39.612997 | orchestrator | 2025-09-27 21:28:39.613001 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-27 21:28:39.613005 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:01.352) 0:00:58.896 **** 2025-09-27 21:28:39.613009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:39.613012 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:39.613016 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613020 | orchestrator | 2025-09-27 21:28:39.613024 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-27 21:28:39.613028 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:00.159) 0:00:59.056 **** 2025-09-27 21:28:39.613031 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613035 | orchestrator | 2025-09-27 21:28:39.613039 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-27 21:28:39.613043 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:00.145) 0:00:59.202 **** 2025-09-27 
21:28:39.613046 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:39.613053 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:39.613056 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613060 | orchestrator | 2025-09-27 21:28:39.613064 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-27 21:28:39.613068 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:00.153) 0:00:59.356 **** 2025-09-27 21:28:39.613072 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613078 | orchestrator | 2025-09-27 21:28:39.613082 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-27 21:28:39.613086 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:00.130) 0:00:59.486 **** 2025-09-27 21:28:39.613090 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:39.613094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:39.613097 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613101 | orchestrator | 2025-09-27 21:28:39.613105 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-27 21:28:39.613108 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:00.169) 0:00:59.655 **** 2025-09-27 21:28:39.613112 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613116 | orchestrator | 2025-09-27 21:28:39.613120 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-27 21:28:39.613123 | orchestrator | Saturday 27 September 2025 21:28:38 +0000 (0:00:00.137) 0:00:59.793 **** 2025-09-27 21:28:39.613127 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:39.613132 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:39.613136 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:39.613140 | orchestrator | 2025-09-27 21:28:39.613144 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-27 21:28:39.613148 | orchestrator | Saturday 27 September 2025 21:28:39 +0000 (0:00:00.155) 0:00:59.949 **** 2025-09-27 21:28:39.613152 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:39.613157 | orchestrator | 2025-09-27 21:28:39.613161 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-27 21:28:39.613165 | orchestrator | Saturday 27 September 2025 21:28:39 +0000 (0:00:00.144) 0:01:00.093 **** 2025-09-27 21:28:39.613173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:45.735774 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:45.735905 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.735923 | orchestrator | 2025-09-27 21:28:45.735935 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-27 21:28:45.735948 | orchestrator | Saturday 27 September 2025 21:28:39 +0000 (0:00:00.377) 0:01:00.470 **** 2025-09-27 21:28:45.735959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:45.735971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:45.735982 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.735993 | orchestrator | 2025-09-27 21:28:45.736005 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-27 21:28:45.736016 | orchestrator | Saturday 27 September 2025 21:28:39 +0000 (0:00:00.150) 0:01:00.621 **** 2025-09-27 21:28:45.736027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:45.736038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:45.736049 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736081 | orchestrator | 2025-09-27 21:28:45.736093 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-27 21:28:45.736104 | orchestrator | Saturday 27 September 2025 21:28:39 +0000 (0:00:00.168) 0:01:00.789 **** 2025-09-27 21:28:45.736115 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736126 | orchestrator | 2025-09-27 21:28:45.736137 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-27 21:28:45.736148 | orchestrator | Saturday 27 September 2025 21:28:40 +0000 (0:00:00.148) 0:01:00.938 **** 2025-09-27 21:28:45.736158 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736169 | orchestrator | 2025-09-27 21:28:45.736180 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-27 21:28:45.736191 | orchestrator | Saturday 27 September 2025 21:28:40 +0000 (0:00:00.147) 0:01:01.085 **** 2025-09-27 21:28:45.736201 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736212 | orchestrator | 2025-09-27 21:28:45.736223 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-27 21:28:45.736243 | orchestrator | Saturday 27 September 2025 21:28:40 +0000 (0:00:00.136) 0:01:01.222 **** 2025-09-27 21:28:45.736254 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:28:45.736266 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-27 21:28:45.736277 | orchestrator | } 2025-09-27 21:28:45.736288 | orchestrator | 2025-09-27 21:28:45.736299 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-27 21:28:45.736310 | orchestrator | Saturday 27 September 2025 21:28:40 +0000 (0:00:00.142) 
0:01:01.365 **** 2025-09-27 21:28:45.736320 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:28:45.736331 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-27 21:28:45.736343 | orchestrator | } 2025-09-27 21:28:45.736355 | orchestrator | 2025-09-27 21:28:45.736366 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-27 21:28:45.736379 | orchestrator | Saturday 27 September 2025 21:28:40 +0000 (0:00:00.144) 0:01:01.510 **** 2025-09-27 21:28:45.736391 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:28:45.736402 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-27 21:28:45.736415 | orchestrator | } 2025-09-27 21:28:45.736427 | orchestrator | 2025-09-27 21:28:45.736438 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-27 21:28:45.736450 | orchestrator | Saturday 27 September 2025 21:28:40 +0000 (0:00:00.145) 0:01:01.656 **** 2025-09-27 21:28:45.736462 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:45.736475 | orchestrator | 2025-09-27 21:28:45.736486 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-27 21:28:45.736499 | orchestrator | Saturday 27 September 2025 21:28:41 +0000 (0:00:00.564) 0:01:02.220 **** 2025-09-27 21:28:45.736510 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:45.736522 | orchestrator | 2025-09-27 21:28:45.736535 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-27 21:28:45.736547 | orchestrator | Saturday 27 September 2025 21:28:41 +0000 (0:00:00.552) 0:01:02.773 **** 2025-09-27 21:28:45.736558 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:45.736568 | orchestrator | 2025-09-27 21:28:45.736579 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-27 21:28:45.736590 | orchestrator | Saturday 27 September 2025 21:28:42 +0000 (0:00:00.536) 0:01:03.310 **** 2025-09-27 21:28:45.736601 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:45.736611 | orchestrator | 2025-09-27 21:28:45.736622 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-27 21:28:45.736633 | orchestrator | Saturday 27 September 2025 21:28:42 +0000 (0:00:00.339) 0:01:03.649 **** 2025-09-27 21:28:45.736644 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736654 | orchestrator | 2025-09-27 21:28:45.736665 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-27 21:28:45.736676 | orchestrator | Saturday 27 September 2025 21:28:42 +0000 (0:00:00.112) 0:01:03.761 **** 2025-09-27 21:28:45.736694 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736705 | orchestrator | 2025-09-27 21:28:45.736716 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-27 21:28:45.736727 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.115) 0:01:03.876 **** 2025-09-27 21:28:45.736738 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:28:45.736749 | orchestrator |  "vgs_report": { 2025-09-27 21:28:45.736760 | orchestrator |  "vg": [] 2025-09-27 21:28:45.736786 | orchestrator |  } 2025-09-27 21:28:45.736797 | orchestrator | } 2025-09-27 21:28:45.736808 | orchestrator | 2025-09-27 21:28:45.736837 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 
2025-09-27 21:28:45.736849 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.135) 0:01:04.012 **** 2025-09-27 21:28:45.736860 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736870 | orchestrator | 2025-09-27 21:28:45.736881 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-27 21:28:45.736892 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.129) 0:01:04.142 **** 2025-09-27 21:28:45.736903 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736914 | orchestrator | 2025-09-27 21:28:45.736924 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-27 21:28:45.736935 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.136) 0:01:04.278 **** 2025-09-27 21:28:45.736946 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736956 | orchestrator | 2025-09-27 21:28:45.736967 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-27 21:28:45.736978 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.134) 0:01:04.413 **** 2025-09-27 21:28:45.736989 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.736999 | orchestrator | 2025-09-27 21:28:45.737010 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-27 21:28:45.737021 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.133) 0:01:04.546 **** 2025-09-27 21:28:45.737032 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737042 | orchestrator | 2025-09-27 21:28:45.737053 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-27 21:28:45.737064 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.138) 0:01:04.685 **** 2025-09-27 21:28:45.737075 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737085 | orchestrator | 2025-09-27 21:28:45.737096 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-27 21:28:45.737107 | orchestrator | Saturday 27 September 2025 21:28:43 +0000 (0:00:00.135) 0:01:04.820 **** 2025-09-27 21:28:45.737118 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737128 | orchestrator | 2025-09-27 21:28:45.737139 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-27 21:28:45.737150 | orchestrator | Saturday 27 September 2025 21:28:44 +0000 (0:00:00.126) 0:01:04.947 **** 2025-09-27 21:28:45.737161 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737171 | orchestrator | 2025-09-27 21:28:45.737182 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-27 21:28:45.737193 | orchestrator | Saturday 27 September 2025 21:28:44 +0000 (0:00:00.146) 0:01:05.094 **** 2025-09-27 21:28:45.737204 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737214 | orchestrator | 2025-09-27 21:28:45.737225 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-27 21:28:45.737241 | orchestrator | Saturday 27 September 2025 21:28:44 +0000 (0:00:00.331) 0:01:05.425 **** 2025-09-27 21:28:45.737252 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737263 | orchestrator | 2025-09-27 21:28:45.737274 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] 
********************* 2025-09-27 21:28:45.737284 | orchestrator | Saturday 27 September 2025 21:28:44 +0000 (0:00:00.141) 0:01:05.567 **** 2025-09-27 21:28:45.737295 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737315 | orchestrator | 2025-09-27 21:28:45.737326 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-27 21:28:45.737337 | orchestrator | Saturday 27 September 2025 21:28:44 +0000 (0:00:00.132) 0:01:05.700 **** 2025-09-27 21:28:45.737348 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737358 | orchestrator | 2025-09-27 21:28:45.737369 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-27 21:28:45.737380 | orchestrator | Saturday 27 September 2025 21:28:44 +0000 (0:00:00.139) 0:01:05.839 **** 2025-09-27 21:28:45.737391 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737401 | orchestrator | 2025-09-27 21:28:45.737412 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-27 21:28:45.737423 | orchestrator | Saturday 27 September 2025 21:28:45 +0000 (0:00:00.142) 0:01:05.982 **** 2025-09-27 21:28:45.737433 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737444 | orchestrator | 2025-09-27 21:28:45.737454 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-27 21:28:45.737465 | orchestrator | Saturday 27 September 2025 21:28:45 +0000 (0:00:00.139) 0:01:06.121 **** 2025-09-27 21:28:45.737476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:45.737487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:45.737498 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737509 | orchestrator | 2025-09-27 21:28:45.737520 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-27 21:28:45.737531 | orchestrator | Saturday 27 September 2025 21:28:45 +0000 (0:00:00.160) 0:01:06.281 **** 2025-09-27 21:28:45.737542 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:45.737553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:45.737564 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:45.737574 | orchestrator | 2025-09-27 21:28:45.737585 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-27 21:28:45.737596 | orchestrator | Saturday 27 September 2025 21:28:45 +0000 (0:00:00.155) 0:01:06.437 **** 2025-09-27 21:28:45.737614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693281 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693372 | orchestrator | skipping: [testbed-node-5] 2025-09-27 
21:28:48.693384 | orchestrator | 2025-09-27 21:28:48.693393 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-27 21:28:48.693402 | orchestrator | Saturday 27 September 2025 21:28:45 +0000 (0:00:00.156) 0:01:06.593 **** 2025-09-27 21:28:48.693410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693418 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693425 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693433 | orchestrator | 2025-09-27 21:28:48.693440 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-27 21:28:48.693447 | orchestrator | Saturday 27 September 2025 21:28:45 +0000 (0:00:00.145) 0:01:06.738 **** 2025-09-27 21:28:48.693455 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693489 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693496 | orchestrator | 2025-09-27 21:28:48.693503 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-27 21:28:48.693511 | orchestrator | Saturday 27 September 2025 21:28:46 +0000 (0:00:00.161) 0:01:06.900 **** 2025-09-27 21:28:48.693518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693525 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693532 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693539 | orchestrator | 2025-09-27 21:28:48.693546 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-27 21:28:48.693554 | orchestrator | Saturday 27 September 2025 21:28:46 +0000 (0:00:00.141) 0:01:07.042 **** 2025-09-27 21:28:48.693561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693568 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693576 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693583 | orchestrator | 2025-09-27 21:28:48.693590 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-27 21:28:48.693597 | orchestrator | Saturday 27 September 2025 21:28:46 +0000 (0:00:00.338) 0:01:07.380 **** 2025-09-27 21:28:48.693604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693611 | orchestrator | skipping: [testbed-node-5] => 
(item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693619 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693626 | orchestrator | 2025-09-27 21:28:48.693633 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-27 21:28:48.693640 | orchestrator | Saturday 27 September 2025 21:28:46 +0000 (0:00:00.157) 0:01:07.537 **** 2025-09-27 21:28:48.693647 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:48.693655 | orchestrator | 2025-09-27 21:28:48.693662 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-27 21:28:48.693669 | orchestrator | Saturday 27 September 2025 21:28:47 +0000 (0:00:00.536) 0:01:08.074 **** 2025-09-27 21:28:48.693676 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:48.693683 | orchestrator | 2025-09-27 21:28:48.693690 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-27 21:28:48.693697 | orchestrator | Saturday 27 September 2025 21:28:47 +0000 (0:00:00.521) 0:01:08.595 **** 2025-09-27 21:28:48.693705 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:28:48.693711 | orchestrator | 2025-09-27 21:28:48.693718 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-27 21:28:48.693726 | orchestrator | Saturday 27 September 2025 21:28:47 +0000 (0:00:00.154) 0:01:08.749 **** 2025-09-27 21:28:48.693733 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'vg_name': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}) 2025-09-27 21:28:48.693742 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'vg_name': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'}) 2025-09-27 21:28:48.693749 | orchestrator | 2025-09-27 21:28:48.693756 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-27 21:28:48.693769 | orchestrator | Saturday 27 September 2025 21:28:48 +0000 (0:00:00.181) 0:01:08.931 **** 2025-09-27 21:28:48.693789 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693797 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693804 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693841 | orchestrator | 2025-09-27 21:28:48.693849 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-27 21:28:48.693857 | orchestrator | Saturday 27 September 2025 21:28:48 +0000 (0:00:00.152) 0:01:09.083 **** 2025-09-27 21:28:48.693865 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693881 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693889 | orchestrator | 2025-09-27 21:28:48.693897 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes 
is missing] ************************ 2025-09-27 21:28:48.693905 | orchestrator | Saturday 27 September 2025 21:28:48 +0000 (0:00:00.154) 0:01:09.238 **** 2025-09-27 21:28:48.693913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'})  2025-09-27 21:28:48.693936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'})  2025-09-27 21:28:48.693944 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:28:48.693952 | orchestrator | 2025-09-27 21:28:48.693960 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-27 21:28:48.693968 | orchestrator | Saturday 27 September 2025 21:28:48 +0000 (0:00:00.152) 0:01:09.390 **** 2025-09-27 21:28:48.693976 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:28:48.693984 | orchestrator |  "lvm_report": { 2025-09-27 21:28:48.693992 | orchestrator |  "lv": [ 2025-09-27 21:28:48.694000 | orchestrator |  { 2025-09-27 21:28:48.694008 | orchestrator |  "lv_name": "osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f", 2025-09-27 21:28:48.694052 | orchestrator |  "vg_name": "ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f" 2025-09-27 21:28:48.694061 | orchestrator |  }, 2025-09-27 21:28:48.694069 | orchestrator |  { 2025-09-27 21:28:48.694077 | orchestrator |  "lv_name": "osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246", 2025-09-27 21:28:48.694085 | orchestrator |  "vg_name": "ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246" 2025-09-27 21:28:48.694093 | orchestrator |  } 2025-09-27 21:28:48.694101 | orchestrator |  ], 2025-09-27 21:28:48.694109 | orchestrator |  "pv": [ 2025-09-27 21:28:48.694116 | orchestrator |  { 2025-09-27 21:28:48.694124 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-27 21:28:48.694132 | orchestrator |  "vg_name": "ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246" 2025-09-27 21:28:48.694139 | orchestrator |  }, 2025-09-27 21:28:48.694147 | orchestrator |  { 2025-09-27 21:28:48.694155 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-27 21:28:48.694163 | orchestrator |  "vg_name": "ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f" 2025-09-27 21:28:48.694170 | orchestrator |  } 2025-09-27 21:28:48.694178 | orchestrator |  ] 2025-09-27 21:28:48.694186 | orchestrator |  } 2025-09-27 21:28:48.694194 | orchestrator | } 2025-09-27 21:28:48.694202 | orchestrator | 2025-09-27 21:28:48.694210 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:28:48.694225 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-27 21:28:48.694232 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-27 21:28:48.694240 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-27 21:28:48.694247 | orchestrator | 2025-09-27 21:28:48.694254 | orchestrator | 2025-09-27 21:28:48.694261 | orchestrator | 2025-09-27 21:28:48.694268 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:28:48.694275 | orchestrator | Saturday 27 September 2025 21:28:48 +0000 (0:00:00.139) 0:01:09.530 **** 2025-09-27 21:28:48.694283 | orchestrator | =============================================================================== 2025-09-27 21:28:48.694290 | 
orchestrator | Create block VGs -------------------------------------------------------- 5.44s 2025-09-27 21:28:48.694297 | orchestrator | Create block LVs -------------------------------------------------------- 4.02s 2025-09-27 21:28:48.694304 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.94s 2025-09-27 21:28:48.694311 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2025-09-27 21:28:48.694318 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s 2025-09-27 21:28:48.694325 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s 2025-09-27 21:28:48.694332 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2025-09-27 21:28:48.694339 | orchestrator | Add known partitions to the list of available block devices ------------- 1.42s 2025-09-27 21:28:48.694352 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-09-27 21:28:49.054553 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2025-09-27 21:28:49.054660 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2025-09-27 21:28:49.054684 | orchestrator | Print LVM report data --------------------------------------------------- 0.79s 2025-09-27 21:28:49.054703 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s 2025-09-27 21:28:49.054720 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.70s 2025-09-27 21:28:49.054738 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.67s 2025-09-27 21:28:49.054755 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.66s 2025-09-27 21:28:49.054774 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.66s 2025-09-27 21:28:49.054792 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s 2025-09-27 21:28:49.054837 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.65s 2025-09-27 21:28:49.054850 | orchestrator | Get initial list of available block devices ----------------------------- 0.63s 2025-09-27 21:29:01.290927 | orchestrator | 2025-09-27 21:29:01 | INFO  | Task c4422b17-613b-4723-ad1b-aa36b32f542e (facts) was prepared for execution. 2025-09-27 21:29:01.291037 | orchestrator | 2025-09-27 21:29:01 | INFO  | It takes a moment until task c4422b17-613b-4723-ad1b-aa36b32f542e (facts) has been started and output is visible here. 
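The play recapped above lays out the Ceph OSD LVM structure: from `ceph_osd_devices` it derives one volume group and one `osd-block-*` logical volume per data disk (here `/dev/sdb` and `/dev/sdc` on testbed-node-5), while every DB/WAL task is skipped because no dedicated `ceph_db_devices`, `ceph_wal_devices`, or `ceph_db_wal_devices` are configured for the testbed — hence the empty `_num_osds_wanted_per_*` dicts and the empty `vgs_report`. A minimal standalone sketch of the equivalent LVM provisioning is below; it assumes the `community.general.lvg`/`lvol` modules, and the device names and UUIDs are copied from the log output above (the real OSISM play derives them from its inventory rather than hard-coding them).

```yaml
# Hypothetical sketch of what "Create block VGs" / "Create block LVs" effectively
# do for testbed-node-5. Values are taken from the log above; the actual play is
# driven by ceph_osd_devices from the configuration repository.
- name: Provision Ceph OSD block VGs and LVs
  hosts: testbed-node-5
  become: true
  vars:
    ceph_osd_devices:
      sdb: {osd_lvm_uuid: 5f61d8e2-65b7-57ca-8dcb-2a964e525246}
      sdc: {osd_lvm_uuid: 2897d5b9-8afd-5dc0-8795-bd1d3af2960f}
  tasks:
    - name: Create block VGs            # one VG per OSD data device
      community.general.lvg:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        pvs: "/dev/{{ item.key }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block LVs            # one osd-block LV spanning each VG
      community.general.lvol:
        vg: "ceph-{{ item.value.osd_lvm_uuid }}"
        lv: "osd-block-{{ item.value.osd_lvm_uuid }}"
        size: "100%FREE"
      loop: "{{ ceph_osd_devices | dict2items }}"
```

The resulting `data`/`data_vg` pairs are the same ones that the "Fail if block LV defined in lvm_volumes is missing" checks above validate, so a rerun of the play finds the expected VG/LV layout already in place.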
2025-09-27 21:29:12.385609 | orchestrator | 2025-09-27 21:29:12.385701 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-27 21:29:12.385719 | orchestrator | 2025-09-27 21:29:12.385732 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-27 21:29:12.385744 | orchestrator | Saturday 27 September 2025 21:29:04 +0000 (0:00:00.197) 0:00:00.197 **** 2025-09-27 21:29:12.385756 | orchestrator | ok: [testbed-manager] 2025-09-27 21:29:12.385768 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:29:12.385833 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:29:12.385846 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:29:12.385857 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:29:12.385882 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:29:12.385905 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:29:12.385917 | orchestrator | 2025-09-27 21:29:12.385928 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-27 21:29:12.385939 | orchestrator | Saturday 27 September 2025 21:29:05 +0000 (0:00:00.896) 0:00:01.094 **** 2025-09-27 21:29:12.385960 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:29:12.385972 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:29:12.385984 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:29:12.385995 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:29:12.386006 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:29:12.386052 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:29:12.386065 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:29:12.386076 | orchestrator | 2025-09-27 21:29:12.386087 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-27 21:29:12.386098 | orchestrator | 2025-09-27 21:29:12.386109 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-27 21:29:12.386120 | orchestrator | Saturday 27 September 2025 21:29:06 +0000 (0:00:01.062) 0:00:02.157 **** 2025-09-27 21:29:12.386131 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:29:12.386142 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:29:12.386153 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:29:12.386164 | orchestrator | ok: [testbed-manager] 2025-09-27 21:29:12.386176 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:29:12.386188 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:29:12.386200 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:29:12.386211 | orchestrator | 2025-09-27 21:29:12.386224 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-27 21:29:12.386236 | orchestrator | 2025-09-27 21:29:12.386248 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-27 21:29:12.386261 | orchestrator | Saturday 27 September 2025 21:29:11 +0000 (0:00:04.784) 0:00:06.941 **** 2025-09-27 21:29:12.386272 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:29:12.386285 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:29:12.386296 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:29:12.386308 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:29:12.386320 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:29:12.386331 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:29:12.386343 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:29:12.386355 | orchestrator | 2025-09-27 21:29:12.386367 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:29:12.386379 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386393 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386405 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386417 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386429 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386441 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386453 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:29:12.386473 | orchestrator | 2025-09-27 21:29:12.386485 | orchestrator | 2025-09-27 21:29:12.386497 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:29:12.386509 | orchestrator | Saturday 27 September 2025 21:29:12 +0000 (0:00:00.500) 0:00:07.441 **** 2025-09-27 21:29:12.386522 | orchestrator | =============================================================================== 2025-09-27 21:29:12.386533 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s 2025-09-27 21:29:12.386544 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2025-09-27 21:29:12.386555 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s 2025-09-27 21:29:12.386566 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-09-27 21:29:24.721573 | orchestrator | 2025-09-27 21:29:24 | INFO  | Task 18bd9399-4a6c-46b7-b96e-c484e5d6f351 (frr) was prepared for execution. 2025-09-27 21:29:24.721667 | orchestrator | 2025-09-27 21:29:24 | INFO  | It takes a moment until task 18bd9399-4a6c-46b7-b96e-c484e5d6f351 (frr) has been started and output is visible here. 
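The facts run above applies `osism.commons.facts` on every host and then gathers facts once; "Copy fact files" and the `--limit` fallback play are skipped because this job configures no custom fact files and runs without `--limit`. A minimal sketch of the same flow follows; the `facts.d` path is the stock Ansible default and is an assumption here, since the role may make it configurable.

```yaml
# Sketch of the facts run above, under the assumption that the role uses the
# default Ansible custom-facts directory.
- name: Apply role facts
  hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        owner: root
        group: root
        mode: "0755"
    # "Copy fact files" is skipped in this run: no custom fact files are configured.

- name: Gather facts for all hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Gathers facts about hosts
      ansible.builtin.setup:
```

Gathering facts up front like this presumably lets the later plays (frr and the nutshell collection) rely on cached facts instead of re-collecting them per play.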
2025-09-27 21:29:57.150789 | orchestrator | 2025-09-27 21:29:57.150904 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-27 21:29:57.150927 | orchestrator | 2025-09-27 21:29:57.150945 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-27 21:29:57.150962 | orchestrator | Saturday 27 September 2025 21:29:28 +0000 (0:00:00.177) 0:00:00.177 **** 2025-09-27 21:29:57.150979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:29:57.150998 | orchestrator | 2025-09-27 21:29:57.151016 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-27 21:29:57.151034 | orchestrator | Saturday 27 September 2025 21:29:28 +0000 (0:00:00.205) 0:00:00.383 **** 2025-09-27 21:29:57.151051 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:57.151069 | orchestrator | 2025-09-27 21:29:57.151086 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-27 21:29:57.151103 | orchestrator | Saturday 27 September 2025 21:29:29 +0000 (0:00:00.974) 0:00:01.357 **** 2025-09-27 21:29:57.151120 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:57.151137 | orchestrator | 2025-09-27 21:29:57.151173 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-27 21:29:57.151191 | orchestrator | Saturday 27 September 2025 21:29:46 +0000 (0:00:17.268) 0:00:18.626 **** 2025-09-27 21:29:57.151209 | orchestrator | ok: [testbed-manager] 2025-09-27 21:29:57.151228 | orchestrator | 2025-09-27 21:29:57.151245 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-27 21:29:57.151261 | orchestrator | Saturday 27 September 2025 21:29:48 +0000 (0:00:01.243) 0:00:19.870 **** 2025-09-27 21:29:57.151278 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:57.151295 | orchestrator | 2025-09-27 21:29:57.151314 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-27 21:29:57.151333 | orchestrator | Saturday 27 September 2025 21:29:48 +0000 (0:00:00.928) 0:00:20.799 **** 2025-09-27 21:29:57.151351 | orchestrator | ok: [testbed-manager] 2025-09-27 21:29:57.151368 | orchestrator | 2025-09-27 21:29:57.151386 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-27 21:29:57.151405 | orchestrator | Saturday 27 September 2025 21:29:50 +0000 (0:00:01.193) 0:00:21.992 **** 2025-09-27 21:29:57.151465 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:29:57.151481 | orchestrator | 2025-09-27 21:29:57.151499 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-27 21:29:57.151516 | orchestrator | Saturday 27 September 2025 21:29:50 +0000 (0:00:00.789) 0:00:22.782 **** 2025-09-27 21:29:57.151534 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:29:57.151552 | orchestrator | 2025-09-27 21:29:57.151569 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-27 21:29:57.151609 | orchestrator | Saturday 27 September 2025 21:29:51 +0000 (0:00:00.163) 0:00:22.945 **** 2025-09-27 21:29:57.151620 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:57.151631 | orchestrator 
| 2025-09-27 21:29:57.151642 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-27 21:29:57.151653 | orchestrator | Saturday 27 September 2025 21:29:52 +0000 (0:00:00.921) 0:00:23.867 **** 2025-09-27 21:29:57.151663 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-27 21:29:57.151673 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-27 21:29:57.151684 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-27 21:29:57.151694 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-27 21:29:57.151703 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-27 21:29:57.151713 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-27 21:29:57.151723 | orchestrator | 2025-09-27 21:29:57.151733 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-27 21:29:57.151742 | orchestrator | Saturday 27 September 2025 21:29:54 +0000 (0:00:02.120) 0:00:25.988 **** 2025-09-27 21:29:57.151781 | orchestrator | ok: [testbed-manager] 2025-09-27 21:29:57.151792 | orchestrator | 2025-09-27 21:29:57.151801 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-27 21:29:57.151811 | orchestrator | Saturday 27 September 2025 21:29:55 +0000 (0:00:01.403) 0:00:27.392 **** 2025-09-27 21:29:57.151820 | orchestrator | changed: [testbed-manager] 2025-09-27 21:29:57.151830 | orchestrator | 2025-09-27 21:29:57.151840 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:29:57.151850 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:29:57.151861 | orchestrator | 2025-09-27 21:29:57.151871 | orchestrator | 2025-09-27 21:29:57.151880 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:29:57.151890 | orchestrator | Saturday 27 September 2025 21:29:56 +0000 (0:00:01.322) 0:00:28.714 **** 2025-09-27 21:29:57.151900 | orchestrator | =============================================================================== 2025-09-27 21:29:57.151909 | orchestrator | osism.services.frr : Install frr package ------------------------------- 17.27s 2025-09-27 21:29:57.151919 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.12s 2025-09-27 21:29:57.151928 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.40s 2025-09-27 21:29:57.151938 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.32s 2025-09-27 21:29:57.151968 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.24s 2025-09-27 21:29:57.151979 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 2025-09-27 21:29:57.151988 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 0.97s 2025-09-27 21:29:57.151998 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.93s 2025-09-27 
21:29:57.152007 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.92s 2025-09-27 21:29:57.152017 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.79s 2025-09-27 21:29:57.152027 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2025-09-27 21:29:57.152036 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s 2025-09-27 21:29:57.438833 | orchestrator | 2025-09-27 21:29:57.442173 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Sep 27 21:29:57 UTC 2025 2025-09-27 21:29:57.442242 | orchestrator | 2025-09-27 21:29:59.317108 | orchestrator | 2025-09-27 21:29:59 | INFO  | Collection nutshell is prepared for execution 2025-09-27 21:29:59.317190 | orchestrator | 2025-09-27 21:29:59 | INFO  | D [0] - dotfiles 2025-09-27 21:30:09.424530 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [0] - homer 2025-09-27 21:30:09.424723 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [0] - netdata 2025-09-27 21:30:09.424769 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [0] - openstackclient 2025-09-27 21:30:09.424782 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [0] - phpmyadmin 2025-09-27 21:30:09.424793 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [0] - common 2025-09-27 21:30:09.424804 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [1] -- loadbalancer 2025-09-27 21:30:09.424815 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [2] --- opensearch 2025-09-27 21:30:09.424838 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [2] --- mariadb-ng 2025-09-27 21:30:09.425048 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [3] ---- horizon 2025-09-27 21:30:09.425070 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [3] ---- keystone 2025-09-27 21:30:09.425328 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [4] ----- neutron 2025-09-27 21:30:09.425649 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [5] ------ wait-for-nova 2025-09-27 21:30:09.425672 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [5] ------ octavia 2025-09-27 21:30:09.426585 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [4] ----- barbican 2025-09-27 21:30:09.426837 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [4] ----- designate 2025-09-27 21:30:09.426859 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [4] ----- ironic 2025-09-27 21:30:09.427038 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [4] ----- placement 2025-09-27 21:30:09.428077 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [4] ----- magnum 2025-09-27 21:30:09.428101 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [1] -- openvswitch 2025-09-27 21:30:09.428301 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [2] --- ovn 2025-09-27 21:30:09.428599 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [1] -- memcached 2025-09-27 21:30:09.428619 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [1] -- redis 2025-09-27 21:30:09.428891 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [1] -- rabbitmq-ng 2025-09-27 21:30:09.429190 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [0] - kubernetes 2025-09-27 21:30:09.430847 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [1] -- kubeconfig 2025-09-27 21:30:09.431100 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [1] -- copy-kubeconfig 2025-09-27 21:30:09.431407 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [0] - ceph 2025-09-27 21:30:09.433218 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [1] -- ceph-pools 2025-09-27 
21:30:09.433244 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [2] --- copy-ceph-keys 2025-09-27 21:30:09.433419 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [3] ---- cephclient 2025-09-27 21:30:09.433732 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-27 21:30:09.433811 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [4] ----- wait-for-keystone 2025-09-27 21:30:09.434060 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-27 21:30:09.434301 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [5] ------ glance 2025-09-27 21:30:09.434322 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [5] ------ cinder 2025-09-27 21:30:09.434619 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [5] ------ nova 2025-09-27 21:30:09.435020 | orchestrator | 2025-09-27 21:30:09 | INFO  | A [4] ----- prometheus 2025-09-27 21:30:09.435260 | orchestrator | 2025-09-27 21:30:09 | INFO  | D [5] ------ grafana 2025-09-27 21:30:09.632080 | orchestrator | 2025-09-27 21:30:09 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-27 21:30:09.632156 | orchestrator | 2025-09-27 21:30:09 | INFO  | Tasks are running in the background 2025-09-27 21:30:11.961619 | orchestrator | 2025-09-27 21:30:11 | INFO  | No task IDs specified, wait for all currently running tasks 2025-09-27 21:30:14.070758 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:14.071116 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:14.073271 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:14.073700 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:14.074514 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:14.077397 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:14.077428 | orchestrator | 2025-09-27 21:30:14 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:14.077439 | orchestrator | 2025-09-27 21:30:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:17.124272 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:17.124582 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:17.127234 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:17.127268 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:17.128154 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:17.128864 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:17.130178 | orchestrator | 2025-09-27 21:30:17 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:17.130208 | orchestrator | 2025-09-27 21:30:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:20.154156 | orchestrator | 2025-09-27 21:30:20 | INFO  
| Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:20.154864 | orchestrator | 2025-09-27 21:30:20 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:20.156186 | orchestrator | 2025-09-27 21:30:20 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:20.156832 | orchestrator | 2025-09-27 21:30:20 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:20.158253 | orchestrator | 2025-09-27 21:30:20 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:20.160075 | orchestrator | 2025-09-27 21:30:20 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:20.160579 | orchestrator | 2025-09-27 21:30:20 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:20.160721 | orchestrator | 2025-09-27 21:30:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:23.294267 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:23.294393 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:23.294418 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:23.294431 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:23.294441 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:23.294453 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:23.294463 | orchestrator | 2025-09-27 21:30:23 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:23.294479 | orchestrator | 2025-09-27 21:30:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:26.585079 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:26.585158 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:26.589260 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:26.611262 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:26.776554 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:26.777772 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:26.778544 | orchestrator | 2025-09-27 21:30:26 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:26.778925 | orchestrator | 2025-09-27 21:30:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:29.843985 | orchestrator | 2025-09-27 21:30:29 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:29.851692 | orchestrator | 2025-09-27 21:30:29 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:29.852485 | orchestrator | 2025-09-27 21:30:29 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:29.854555 
| orchestrator | 2025-09-27 21:30:29 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:29.855131 | orchestrator | 2025-09-27 21:30:29 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:29.856428 | orchestrator | 2025-09-27 21:30:29 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:29.868879 | orchestrator | 2025-09-27 21:30:29 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:29.868961 | orchestrator | 2025-09-27 21:30:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:32.947186 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:32.947265 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state STARTED 2025-09-27 21:30:32.947279 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:32.947290 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:32.947328 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:32.947340 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:32.947351 | orchestrator | 2025-09-27 21:30:32 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:32.947362 | orchestrator | 2025-09-27 21:30:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:36.050255 | orchestrator | 2025-09-27 21:30:36.050331 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-27 21:30:36.050346 | orchestrator | 2025-09-27 21:30:36.050358 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-27 21:30:36.050369 | orchestrator | Saturday 27 September 2025 21:30:22 +0000 (0:00:00.881) 0:00:00.881 **** 2025-09-27 21:30:36.050380 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:30:36.050392 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:30:36.050403 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:30:36.050414 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:30:36.050425 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:30:36.050435 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:30:36.050446 | orchestrator | changed: [testbed-manager] 2025-09-27 21:30:36.050457 | orchestrator | 2025-09-27 21:30:36.050468 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-27 21:30:36.050479 | orchestrator | Saturday 27 September 2025 21:30:25 +0000 (0:00:03.875) 0:00:04.756 **** 2025-09-27 21:30:36.050490 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-27 21:30:36.050502 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-27 21:30:36.050512 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-27 21:30:36.050523 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-27 21:30:36.050534 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-27 21:30:36.050545 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-27 21:30:36.050556 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-27 21:30:36.050566 | orchestrator | 2025-09-27 21:30:36.050577 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-27 21:30:36.050589 | orchestrator | Saturday 27 September 2025 21:30:27 +0000 (0:00:01.226) 0:00:05.983 **** 2025-09-27 21:30:36.050611 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.646753', 'end': '2025-09-27 21:30:26.657657', 'delta': '0:00:00.010904', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050631 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.612528', 'end': '2025-09-27 21:30:26.617848', 'delta': '0:00:00.005320', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050663 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.675440', 'end': '2025-09-27 21:30:26.686728', 'delta': '0:00:00.011288', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050691 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.582636', 'end': '2025-09-27 21:30:26.586281', 'delta': '0:00:00.003645', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050704 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.909163', 'end': '2025-09-27 21:30:26.917314', 'delta': '0:00:00.008151', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050749 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.934882', 'end': '2025-09-27 21:30:26.941843', 'delta': '0:00:00.006961', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050767 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-27 21:30:26.981917', 'end': '2025-09-27 21:30:26.992774', 'delta': '0:00:00.010857', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-27 21:30:36.050792 | orchestrator | 2025-09-27 
21:30:36.050804 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-27 21:30:36.050817 | orchestrator | Saturday 27 September 2025 21:30:30 +0000 (0:00:03.580) 0:00:09.563 **** 2025-09-27 21:30:36.050830 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-27 21:30:36.050843 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-27 21:30:36.050855 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-27 21:30:36.050867 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-27 21:30:36.050878 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-27 21:30:36.050889 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-27 21:30:36.050900 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-27 21:30:36.050910 | orchestrator | 2025-09-27 21:30:36.050921 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-27 21:30:36.050933 | orchestrator | Saturday 27 September 2025 21:30:32 +0000 (0:00:01.683) 0:00:11.247 **** 2025-09-27 21:30:36.050944 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-27 21:30:36.050954 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-27 21:30:36.050965 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-27 21:30:36.050976 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-27 21:30:36.050987 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-27 21:30:36.050998 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-27 21:30:36.051008 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-27 21:30:36.051019 | orchestrator | 2025-09-27 21:30:36.051030 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:30:36.051049 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051062 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051073 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051084 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051095 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051106 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051117 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:30:36.051128 | orchestrator | 2025-09-27 21:30:36.051139 | orchestrator | 2025-09-27 21:30:36.051150 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:30:36.051161 | orchestrator | Saturday 27 September 2025 21:30:34 +0000 (0:00:02.191) 0:00:13.438 **** 2025-09-27 21:30:36.051172 | orchestrator | =============================================================================== 2025-09-27 21:30:36.051183 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.88s 2025-09-27 21:30:36.051194 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. 
--- 3.58s 2025-09-27 21:30:36.051212 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.19s 2025-09-27 21:30:36.051223 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.68s 2025-09-27 21:30:36.051234 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.23s 2025-09-27 21:30:36.051246 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:36.051257 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task bb2e32ce-e5f5-4fb1-a179-fd0be46289d8 is in state SUCCESS 2025-09-27 21:30:36.051268 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:36.051279 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:36.051289 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:36.051304 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:36.051315 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:36.051326 | orchestrator | 2025-09-27 21:30:36 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:36.051337 | orchestrator | 2025-09-27 21:30:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:39.030398 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:39.030475 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:39.030635 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:39.032610 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:39.033908 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:39.036681 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:39.036746 | orchestrator | 2025-09-27 21:30:39 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:39.036761 | orchestrator | 2025-09-27 21:30:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:42.077606 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:42.077825 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:42.078129 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:42.079075 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:42.094348 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:42.100417 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:42.100948 | orchestrator | 2025-09-27 21:30:42 | INFO  | Task 
236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:42.100972 | orchestrator | 2025-09-27 21:30:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:45.128016 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:45.128920 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:45.130940 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:45.131417 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:45.132179 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:45.132560 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:45.133446 | orchestrator | 2025-09-27 21:30:45 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:45.133470 | orchestrator | 2025-09-27 21:30:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:48.188890 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:48.188972 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:48.188986 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:48.189504 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:48.191624 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:48.192462 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:48.193484 | orchestrator | 2025-09-27 21:30:48 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:48.193505 | orchestrator | 2025-09-27 21:30:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:51.411954 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:51.412064 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:51.412079 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:51.412091 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:51.412102 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:51.412113 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:51.412125 | orchestrator | 2025-09-27 21:30:51 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:51.412813 | orchestrator | 2025-09-27 21:30:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:54.496984 | orchestrator | 2025-09-27 21:30:54 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:54.497093 | orchestrator | 2025-09-27 
21:30:54 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:54.497123 | orchestrator | 2025-09-27 21:30:54 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:54.497145 | orchestrator | 2025-09-27 21:30:54 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:54.497158 | orchestrator | 2025-09-27 21:30:54 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:54.497194 | orchestrator | 2025-09-27 21:30:54 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:54.497205 | orchestrator | 2025-09-27 21:30:54 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:54.497233 | orchestrator | 2025-09-27 21:30:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:30:57.382956 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state STARTED 2025-09-27 21:30:57.384630 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:30:57.385427 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:30:57.388563 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:30:57.391336 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:30:57.391719 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:30:57.393605 | orchestrator | 2025-09-27 21:30:57 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:30:57.393643 | orchestrator | 2025-09-27 21:30:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:00.454353 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task debefcd1-cfab-4f93-8e9a-d450959c8a06 is in state SUCCESS 2025-09-27 21:31:00.455148 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:00.456060 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:00.456849 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state STARTED 2025-09-27 21:31:00.458450 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:00.459345 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:00.459369 | orchestrator | 2025-09-27 21:31:00 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:00.459382 | orchestrator | 2025-09-27 21:31:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:03.506970 | orchestrator | 2025-09-27 21:31:03 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:03.510108 | orchestrator | 2025-09-27 21:31:03 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:03.510135 | orchestrator | 2025-09-27 21:31:03 | INFO  | Task 56ea0a69-85d5-4747-acd8-94c8cd350455 is in state SUCCESS 2025-09-27 21:31:03.513544 | orchestrator | 2025-09-27 21:31:03 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 
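The interleaved "Task <uuid> is in state STARTED" and "Wait 1 second(s) until the next check" lines are the deploy client polling the state of the tasks it has queued: each play is tracked under a task UUID, and the run only proceeds once a task reports SUCCESS. A minimal sketch of such a wait loop, in Python, is shown below; the get_state() helper and the polling interval are assumptions made for illustration, not the actual osism client code.

import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until every task has left the STARTED state.

    get_state(task_id) is a hypothetical helper assumed to return one of
    "STARTED", "SUCCESS" or "FAILURE".
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)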
2025-09-27 21:31:03.516804 | orchestrator | 2025-09-27 21:31:03 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:03.516843 | orchestrator | 2025-09-27 21:31:03 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:03.516856 | orchestrator | 2025-09-27 21:31:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:06.562183 | orchestrator | 2025-09-27 21:31:06 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:06.562288 | orchestrator | 2025-09-27 21:31:06 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:06.562328 | orchestrator | 2025-09-27 21:31:06 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:06.562341 | orchestrator | 2025-09-27 21:31:06 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:06.563662 | orchestrator | 2025-09-27 21:31:06 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:06.563927 | orchestrator | 2025-09-27 21:31:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:09.605809 | orchestrator | 2025-09-27 21:31:09 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:09.605867 | orchestrator | 2025-09-27 21:31:09 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:09.605875 | orchestrator | 2025-09-27 21:31:09 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:09.606093 | orchestrator | 2025-09-27 21:31:09 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:09.606525 | orchestrator | 2025-09-27 21:31:09 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:09.606647 | orchestrator | 2025-09-27 21:31:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:12.639100 | orchestrator | 2025-09-27 21:31:12 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:12.640789 | orchestrator | 2025-09-27 21:31:12 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:12.641777 | orchestrator | 2025-09-27 21:31:12 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:12.642634 | orchestrator | 2025-09-27 21:31:12 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:12.643311 | orchestrator | 2025-09-27 21:31:12 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:12.643663 | orchestrator | 2025-09-27 21:31:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:15.694652 | orchestrator | 2025-09-27 21:31:15 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:15.695932 | orchestrator | 2025-09-27 21:31:15 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:15.697584 | orchestrator | 2025-09-27 21:31:15 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:15.707224 | orchestrator | 2025-09-27 21:31:15 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:15.711451 | orchestrator | 2025-09-27 21:31:15 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:15.711760 | orchestrator | 2025-09-27 21:31:15 | INFO  | Wait 1 second(s) until the next check 
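The geerlingguy.dotfiles play recapped above follows a simple per-file pattern: clone the dotfiles repository, remove any regular file already occupying the target path, make sure the parent directory exists, then symlink the repository copy into the home directory. A rough Python equivalent of that per-file logic, purely illustrative and with placeholder paths rather than values from this job, could look like this:

from pathlib import Path

def link_dotfiles(repo_dir, home, files=(".tmux.conf",)):
    """Mirror the clone-then-symlink flow of the dotfiles play (illustrative only)."""
    repo_dir, home = Path(repo_dir), Path(home)
    for name in files:
        src = repo_dir / name
        dst = home / name
        # Remove an existing regular file so the link can replace it,
        # analogous to "Remove existing dotfiles file if a replacement is being linked."
        if dst.exists() and not dst.is_symlink():
            dst.unlink()
        # "Ensure parent folders of link dotfiles exist."
        dst.parent.mkdir(parents=True, exist_ok=True)
        # "Link dotfiles into home folder."
        if not (dst.is_symlink() and dst.resolve() == src.resolve()):
            if dst.is_symlink():
                dst.unlink()
            dst.symlink_to(src)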
2025-09-27 21:31:18.761744 | orchestrator | 2025-09-27 21:31:18 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:18.761943 | orchestrator | 2025-09-27 21:31:18 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:18.762709 | orchestrator | 2025-09-27 21:31:18 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:18.764430 | orchestrator | 2025-09-27 21:31:18 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:18.766717 | orchestrator | 2025-09-27 21:31:18 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:18.766742 | orchestrator | 2025-09-27 21:31:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:21.854747 | orchestrator | 2025-09-27 21:31:21 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:21.856333 | orchestrator | 2025-09-27 21:31:21 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:21.862782 | orchestrator | 2025-09-27 21:31:21 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state STARTED 2025-09-27 21:31:21.862826 | orchestrator | 2025-09-27 21:31:21 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:21.862839 | orchestrator | 2025-09-27 21:31:21 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:21.862850 | orchestrator | 2025-09-27 21:31:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:24.897739 | orchestrator | 2025-09-27 21:31:24 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:24.898342 | orchestrator | 2025-09-27 21:31:24 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:24.899032 | orchestrator | 2025-09-27 21:31:24 | INFO  | Task 2e2516eb-ef92-468d-9077-6c42112b1b3a is in state SUCCESS 2025-09-27 21:31:24.900998 | orchestrator | 2025-09-27 21:31:24.901037 | orchestrator | 2025-09-27 21:31:24.901049 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-27 21:31:24.901061 | orchestrator | 2025-09-27 21:31:24.901074 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-27 21:31:24.901086 | orchestrator | Saturday 27 September 2025 21:30:21 +0000 (0:00:00.297) 0:00:00.297 **** 2025-09-27 21:31:24.901103 | orchestrator | ok: [testbed-manager] => { 2025-09-27 21:31:24.901116 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-27 21:31:24.901129 | orchestrator | } 2025-09-27 21:31:24.901297 | orchestrator | 2025-09-27 21:31:24.901431 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-27 21:31:24.901444 | orchestrator | Saturday 27 September 2025 21:30:21 +0000 (0:00:00.421) 0:00:00.718 **** 2025-09-27 21:31:24.901456 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.901469 | orchestrator | 2025-09-27 21:31:24.901480 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-27 21:31:24.901492 | orchestrator | Saturday 27 September 2025 21:30:24 +0000 (0:00:02.518) 0:00:03.236 **** 2025-09-27 21:31:24.901504 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-27 21:31:24.901515 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-27 21:31:24.901527 | orchestrator | 2025-09-27 21:31:24.901538 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-27 21:31:24.901550 | orchestrator | Saturday 27 September 2025 21:30:25 +0000 (0:00:00.964) 0:00:04.201 **** 2025-09-27 21:31:24.901561 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.901573 | orchestrator | 2025-09-27 21:31:24.901584 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-27 21:31:24.901596 | orchestrator | Saturday 27 September 2025 21:30:26 +0000 (0:00:01.922) 0:00:06.124 **** 2025-09-27 21:31:24.901607 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.901618 | orchestrator | 2025-09-27 21:31:24.901629 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-27 21:31:24.901640 | orchestrator | Saturday 27 September 2025 21:30:29 +0000 (0:00:02.747) 0:00:08.871 **** 2025-09-27 21:31:24.901651 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
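The "Manage homer service" task above is retried until the compose stack comes up; the FAILED - RETRYING line shows one attempt out of the retry budget being consumed before the task eventually reports ok. A comparable retry-until-up loop written directly against the docker compose CLI might look like the sketch below; the project directory, retry count and delay are illustrative assumptions, not the role's actual defaults.

import subprocess
import time

def compose_up_with_retries(project_dir, retries=10, delay=5):
    """Bring up a compose project, retrying a few times like the Ansible task does."""
    for attempt in range(1, retries + 1):
        result = subprocess.run(
            ["docker", "compose", "up", "-d"],
            cwd=project_dir,
            capture_output=True,
            text=True,
        )
        if result.returncode == 0:
            return True
        print(f"compose up failed (attempt {attempt}/{retries}): {result.stderr.strip()}")
        time.sleep(delay)
    return False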
2025-09-27 21:31:24.901662 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.901694 | orchestrator | 2025-09-27 21:31:24.901705 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-27 21:31:24.901716 | orchestrator | Saturday 27 September 2025 21:30:55 +0000 (0:00:26.166) 0:00:35.038 **** 2025-09-27 21:31:24.901727 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.901758 | orchestrator | 2025-09-27 21:31:24.901769 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:31:24.901781 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.901793 | orchestrator | 2025-09-27 21:31:24.901804 | orchestrator | 2025-09-27 21:31:24.901814 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:31:24.901825 | orchestrator | Saturday 27 September 2025 21:30:59 +0000 (0:00:03.172) 0:00:38.210 **** 2025-09-27 21:31:24.901836 | orchestrator | =============================================================================== 2025-09-27 21:31:24.901847 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.17s 2025-09-27 21:31:24.901858 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.17s 2025-09-27 21:31:24.901868 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.75s 2025-09-27 21:31:24.901879 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.52s 2025-09-27 21:31:24.901890 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.92s 2025-09-27 21:31:24.901901 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.96s 2025-09-27 21:31:24.901912 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.42s 2025-09-27 21:31:24.901922 | orchestrator | 2025-09-27 21:31:24.901933 | orchestrator | 2025-09-27 21:31:24.901944 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-27 21:31:24.901955 | orchestrator | 2025-09-27 21:31:24.901966 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-27 21:31:24.901976 | orchestrator | Saturday 27 September 2025 21:30:19 +0000 (0:00:00.220) 0:00:00.220 **** 2025-09-27 21:31:24.901988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-27 21:31:24.902000 | orchestrator | 2025-09-27 21:31:24.902011 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-27 21:31:24.902090 | orchestrator | Saturday 27 September 2025 21:30:20 +0000 (0:00:00.668) 0:00:00.888 **** 2025-09-27 21:31:24.902103 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-27 21:31:24.902116 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-27 21:31:24.902128 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-27 21:31:24.902140 | orchestrator | 2025-09-27 21:31:24.902152 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-27 
21:31:24.902164 | orchestrator | Saturday 27 September 2025 21:30:21 +0000 (0:00:01.725) 0:00:02.614 **** 2025-09-27 21:31:24.902175 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.902187 | orchestrator | 2025-09-27 21:31:24.902199 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-27 21:31:24.902211 | orchestrator | Saturday 27 September 2025 21:30:24 +0000 (0:00:02.928) 0:00:05.542 **** 2025-09-27 21:31:24.902237 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-27 21:31:24.902250 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.902262 | orchestrator | 2025-09-27 21:31:24.902274 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-27 21:31:24.902286 | orchestrator | Saturday 27 September 2025 21:30:55 +0000 (0:00:30.178) 0:00:35.721 **** 2025-09-27 21:31:24.902298 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.902310 | orchestrator | 2025-09-27 21:31:24.902328 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-27 21:31:24.902340 | orchestrator | Saturday 27 September 2025 21:30:56 +0000 (0:00:00.909) 0:00:36.631 **** 2025-09-27 21:31:24.902352 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.902373 | orchestrator | 2025-09-27 21:31:24.902385 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-27 21:31:24.902398 | orchestrator | Saturday 27 September 2025 21:30:56 +0000 (0:00:00.923) 0:00:37.554 **** 2025-09-27 21:31:24.902409 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.902420 | orchestrator | 2025-09-27 21:31:24.902430 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-27 21:31:24.902441 | orchestrator | Saturday 27 September 2025 21:31:00 +0000 (0:00:03.452) 0:00:41.006 **** 2025-09-27 21:31:24.902452 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.902463 | orchestrator | 2025-09-27 21:31:24.902473 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-27 21:31:24.902484 | orchestrator | Saturday 27 September 2025 21:31:01 +0000 (0:00:01.159) 0:00:42.165 **** 2025-09-27 21:31:24.902495 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.902506 | orchestrator | 2025-09-27 21:31:24.902517 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-27 21:31:24.902527 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.686) 0:00:42.852 **** 2025-09-27 21:31:24.902538 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.902549 | orchestrator | 2025-09-27 21:31:24.902560 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:31:24.902571 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.902581 | orchestrator | 2025-09-27 21:31:24.902592 | orchestrator | 2025-09-27 21:31:24.902603 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:31:24.902614 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.437) 0:00:43.289 **** 2025-09-27 21:31:24.902625 | orchestrator | 
=============================================================================== 2025-09-27 21:31:24.902635 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 30.18s 2025-09-27 21:31:24.902646 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.45s 2025-09-27 21:31:24.902657 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.93s 2025-09-27 21:31:24.902704 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.73s 2025-09-27 21:31:24.902717 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.16s 2025-09-27 21:31:24.902728 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.92s 2025-09-27 21:31:24.902739 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.91s 2025-09-27 21:31:24.902750 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.69s 2025-09-27 21:31:24.902760 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.67s 2025-09-27 21:31:24.902771 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s 2025-09-27 21:31:24.902782 | orchestrator | 2025-09-27 21:31:24.902792 | orchestrator | 2025-09-27 21:31:24.902803 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:31:24.902814 | orchestrator | 2025-09-27 21:31:24.902825 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:31:24.902836 | orchestrator | Saturday 27 September 2025 21:30:20 +0000 (0:00:00.503) 0:00:00.503 **** 2025-09-27 21:31:24.902846 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-27 21:31:24.902857 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-27 21:31:24.902868 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-27 21:31:24.902878 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-09-27 21:31:24.902889 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-27 21:31:24.902900 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-27 21:31:24.902910 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-27 21:31:24.902927 | orchestrator | 2025-09-27 21:31:24.902938 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-27 21:31:24.902949 | orchestrator | 2025-09-27 21:31:24.902960 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-27 21:31:24.902970 | orchestrator | Saturday 27 September 2025 21:30:22 +0000 (0:00:01.962) 0:00:02.480 **** 2025-09-27 21:31:24.902994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:31:24.903013 | orchestrator | 2025-09-27 21:31:24.903024 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-27 21:31:24.903035 | orchestrator | Saturday 27 September 2025 21:30:24 +0000 (0:00:02.274) 0:00:04.754 **** 2025-09-27 
21:31:24.903046 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:31:24.903057 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:31:24.903068 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.903079 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:31:24.903089 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:31:24.903106 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:31:24.903118 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:31:24.903128 | orchestrator | 2025-09-27 21:31:24.903139 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-27 21:31:24.903150 | orchestrator | Saturday 27 September 2025 21:30:26 +0000 (0:00:01.719) 0:00:06.474 **** 2025-09-27 21:31:24.903161 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:31:24.903177 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:31:24.903188 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:31:24.903199 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:31:24.903209 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:31:24.903220 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:31:24.903231 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.903241 | orchestrator | 2025-09-27 21:31:24.903252 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-27 21:31:24.903263 | orchestrator | Saturday 27 September 2025 21:30:29 +0000 (0:00:02.786) 0:00:09.260 **** 2025-09-27 21:31:24.903274 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.903285 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:31:24.903295 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:31:24.903306 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:31:24.903317 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:31:24.903328 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:31:24.903338 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:31:24.903349 | orchestrator | 2025-09-27 21:31:24.903360 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-27 21:31:24.903371 | orchestrator | Saturday 27 September 2025 21:30:31 +0000 (0:00:02.477) 0:00:11.737 **** 2025-09-27 21:31:24.903382 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.903392 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:31:24.903403 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:31:24.903413 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:31:24.903424 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:31:24.903434 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:31:24.903445 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:31:24.903456 | orchestrator | 2025-09-27 21:31:24.903467 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-27 21:31:24.903478 | orchestrator | Saturday 27 September 2025 21:30:42 +0000 (0:00:10.843) 0:00:22.581 **** 2025-09-27 21:31:24.903488 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:31:24.903499 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:31:24.903510 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:31:24.903520 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:31:24.903531 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:31:24.903547 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:31:24.903558 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.903569 | 
orchestrator | 2025-09-27 21:31:24.903580 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-27 21:31:24.903590 | orchestrator | Saturday 27 September 2025 21:31:03 +0000 (0:00:21.053) 0:00:43.634 **** 2025-09-27 21:31:24.903602 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:31:24.903614 | orchestrator | 2025-09-27 21:31:24.903625 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-27 21:31:24.903636 | orchestrator | Saturday 27 September 2025 21:31:04 +0000 (0:00:01.055) 0:00:44.690 **** 2025-09-27 21:31:24.903646 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-27 21:31:24.903657 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-27 21:31:24.903715 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-27 21:31:24.903728 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-27 21:31:24.903739 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-27 21:31:24.903750 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-27 21:31:24.903761 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-27 21:31:24.903772 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-27 21:31:24.903782 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-27 21:31:24.903793 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-27 21:31:24.903803 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-27 21:31:24.903814 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-27 21:31:24.903825 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-27 21:31:24.903835 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-27 21:31:24.903846 | orchestrator | 2025-09-27 21:31:24.903857 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-27 21:31:24.903868 | orchestrator | Saturday 27 September 2025 21:31:10 +0000 (0:00:05.464) 0:00:50.155 **** 2025-09-27 21:31:24.903879 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.903890 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:31:24.903900 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:31:24.903911 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:31:24.903922 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:31:24.903932 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:31:24.903943 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:31:24.903953 | orchestrator | 2025-09-27 21:31:24.903964 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-27 21:31:24.903975 | orchestrator | Saturday 27 September 2025 21:31:11 +0000 (0:00:01.263) 0:00:51.419 **** 2025-09-27 21:31:24.903986 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.903997 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:31:24.904007 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:31:24.904018 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:31:24.904028 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:31:24.904037 | orchestrator | 
changed: [testbed-node-4] 2025-09-27 21:31:24.904046 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:31:24.904056 | orchestrator | 2025-09-27 21:31:24.904065 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-27 21:31:24.904081 | orchestrator | Saturday 27 September 2025 21:31:12 +0000 (0:00:01.198) 0:00:52.617 **** 2025-09-27 21:31:24.904092 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.904101 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:31:24.904111 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:31:24.904120 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:31:24.904129 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:31:24.904145 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:31:24.904154 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:31:24.904164 | orchestrator | 2025-09-27 21:31:24.904178 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-27 21:31:24.904188 | orchestrator | Saturday 27 September 2025 21:31:14 +0000 (0:00:01.273) 0:00:53.891 **** 2025-09-27 21:31:24.904197 | orchestrator | ok: [testbed-manager] 2025-09-27 21:31:24.904207 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:31:24.904216 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:31:24.904225 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:31:24.904235 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:31:24.904244 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:31:24.904253 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:31:24.904263 | orchestrator | 2025-09-27 21:31:24.904272 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-27 21:31:24.904282 | orchestrator | Saturday 27 September 2025 21:31:15 +0000 (0:00:01.745) 0:00:55.636 **** 2025-09-27 21:31:24.904292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-27 21:31:24.904302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:31:24.904312 | orchestrator | 2025-09-27 21:31:24.904322 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-27 21:31:24.904331 | orchestrator | Saturday 27 September 2025 21:31:16 +0000 (0:00:01.183) 0:00:56.820 **** 2025-09-27 21:31:24.904341 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.904350 | orchestrator | 2025-09-27 21:31:24.904360 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-27 21:31:24.904369 | orchestrator | Saturday 27 September 2025 21:31:19 +0000 (0:00:02.593) 0:00:59.413 **** 2025-09-27 21:31:24.904379 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:31:24.904389 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:31:24.904398 | orchestrator | changed: [testbed-manager] 2025-09-27 21:31:24.904408 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:31:24.904417 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:31:24.904427 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:31:24.904436 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:31:24.904446 | orchestrator | 2025-09-27 21:31:24.904455 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-27 21:31:24.904465 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904475 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904485 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904494 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904504 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904514 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904523 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:31:24.904533 | orchestrator | 2025-09-27 21:31:24.904542 | orchestrator | 2025-09-27 21:31:24.904552 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:31:24.904561 | orchestrator | Saturday 27 September 2025 21:31:22 +0000 (0:00:02.714) 0:01:02.128 **** 2025-09-27 21:31:24.904576 | orchestrator | =============================================================================== 2025-09-27 21:31:24.904585 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 21.05s 2025-09-27 21:31:24.904595 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.84s 2025-09-27 21:31:24.904604 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.46s 2025-09-27 21:31:24.904614 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.79s 2025-09-27 21:31:24.904623 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.71s 2025-09-27 21:31:24.904632 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.59s 2025-09-27 21:31:24.904642 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.48s 2025-09-27 21:31:24.904651 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.27s 2025-09-27 21:31:24.904661 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.98s 2025-09-27 21:31:24.904686 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.75s 2025-09-27 21:31:24.904696 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.72s 2025-09-27 21:31:24.904710 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.27s 2025-09-27 21:31:24.904721 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.26s 2025-09-27 21:31:24.904730 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.20s 2025-09-27 21:31:24.904740 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.18s 2025-09-27 21:31:24.904750 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.06s 2025-09-27 21:31:24.904759 | orchestrator | 2025-09-27 21:31:24 | INFO  | Task 
27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:24.904769 | orchestrator | 2025-09-27 21:31:24 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:24.904779 | orchestrator | 2025-09-27 21:31:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:27.932570 | orchestrator | 2025-09-27 21:31:27 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:27.934647 | orchestrator | 2025-09-27 21:31:27 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:27.936787 | orchestrator | 2025-09-27 21:31:27 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:27.938131 | orchestrator | 2025-09-27 21:31:27 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:27.938384 | orchestrator | 2025-09-27 21:31:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:30.981695 | orchestrator | 2025-09-27 21:31:30 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:30.983997 | orchestrator | 2025-09-27 21:31:30 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:30.985420 | orchestrator | 2025-09-27 21:31:30 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:30.989000 | orchestrator | 2025-09-27 21:31:30 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state STARTED 2025-09-27 21:31:30.989025 | orchestrator | 2025-09-27 21:31:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:34.048348 | orchestrator | 2025-09-27 21:31:34 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:34.048452 | orchestrator | 2025-09-27 21:31:34 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:34.048497 | orchestrator | 2025-09-27 21:31:34 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:34.048510 | orchestrator | 2025-09-27 21:31:34 | INFO  | Task 236ce99b-d366-4376-86a1-0d8367815ec1 is in state SUCCESS 2025-09-27 21:31:34.048521 | orchestrator | 2025-09-27 21:31:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:37.095759 | orchestrator | 2025-09-27 21:31:37 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:37.097321 | orchestrator | 2025-09-27 21:31:37 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:37.099723 | orchestrator | 2025-09-27 21:31:37 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:37.100475 | orchestrator | 2025-09-27 21:31:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:40.144257 | orchestrator | 2025-09-27 21:31:40 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:40.145452 | orchestrator | 2025-09-27 21:31:40 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:40.148919 | orchestrator | 2025-09-27 21:31:40 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:40.148950 | orchestrator | 2025-09-27 21:31:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:43.201602 | orchestrator | 2025-09-27 21:31:43 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:43.201759 | orchestrator | 2025-09-27 21:31:43 | INFO  | Task 
6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:43.201777 | orchestrator | 2025-09-27 21:31:43 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:43.201789 | orchestrator | 2025-09-27 21:31:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:46.241573 | orchestrator | 2025-09-27 21:31:46 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:46.241882 | orchestrator | 2025-09-27 21:31:46 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:46.242166 | orchestrator | 2025-09-27 21:31:46 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:46.242319 | orchestrator | 2025-09-27 21:31:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:49.273533 | orchestrator | 2025-09-27 21:31:49 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:49.274815 | orchestrator | 2025-09-27 21:31:49 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:49.276327 | orchestrator | 2025-09-27 21:31:49 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:49.276359 | orchestrator | 2025-09-27 21:31:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:52.317997 | orchestrator | 2025-09-27 21:31:52 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:52.318120 | orchestrator | 2025-09-27 21:31:52 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:52.318332 | orchestrator | 2025-09-27 21:31:52 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:52.318355 | orchestrator | 2025-09-27 21:31:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:55.359920 | orchestrator | 2025-09-27 21:31:55 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:55.360322 | orchestrator | 2025-09-27 21:31:55 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:55.361878 | orchestrator | 2025-09-27 21:31:55 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:55.361923 | orchestrator | 2025-09-27 21:31:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:31:58.397164 | orchestrator | 2025-09-27 21:31:58 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:31:58.398219 | orchestrator | 2025-09-27 21:31:58 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:31:58.399688 | orchestrator | 2025-09-27 21:31:58 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:31:58.399770 | orchestrator | 2025-09-27 21:31:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:01.443887 | orchestrator | 2025-09-27 21:32:01 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:01.444850 | orchestrator | 2025-09-27 21:32:01 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:01.446876 | orchestrator | 2025-09-27 21:32:01 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:01.446922 | orchestrator | 2025-09-27 21:32:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:04.492593 | orchestrator | 2025-09-27 21:32:04 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state 
STARTED 2025-09-27 21:32:04.493323 | orchestrator | 2025-09-27 21:32:04 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:04.496281 | orchestrator | 2025-09-27 21:32:04 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:04.496671 | orchestrator | 2025-09-27 21:32:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:07.684831 | orchestrator | 2025-09-27 21:32:07 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:07.688042 | orchestrator | 2025-09-27 21:32:07 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:07.688073 | orchestrator | 2025-09-27 21:32:07 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:07.688085 | orchestrator | 2025-09-27 21:32:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:10.735074 | orchestrator | 2025-09-27 21:32:10 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:10.735720 | orchestrator | 2025-09-27 21:32:10 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:10.737146 | orchestrator | 2025-09-27 21:32:10 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:10.737417 | orchestrator | 2025-09-27 21:32:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:13.783383 | orchestrator | 2025-09-27 21:32:13 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:13.785910 | orchestrator | 2025-09-27 21:32:13 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:13.788075 | orchestrator | 2025-09-27 21:32:13 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:13.788102 | orchestrator | 2025-09-27 21:32:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:16.833265 | orchestrator | 2025-09-27 21:32:16 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:16.834001 | orchestrator | 2025-09-27 21:32:16 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:16.836021 | orchestrator | 2025-09-27 21:32:16 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:16.836051 | orchestrator | 2025-09-27 21:32:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:19.880141 | orchestrator | 2025-09-27 21:32:19 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:19.880242 | orchestrator | 2025-09-27 21:32:19 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:19.880903 | orchestrator | 2025-09-27 21:32:19 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:19.880941 | orchestrator | 2025-09-27 21:32:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:22.920251 | orchestrator | 2025-09-27 21:32:22 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:22.921170 | orchestrator | 2025-09-27 21:32:22 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state STARTED 2025-09-27 21:32:22.922411 | orchestrator | 2025-09-27 21:32:22 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:22.922432 | orchestrator | 2025-09-27 21:32:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:25.968276 | orchestrator 
| 2025-09-27 21:32:25 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED
2025-09-27 21:32:25.968609 | orchestrator | 2025-09-27 21:32:25 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED
2025-09-27 21:32:25.970488 | orchestrator | 2025-09-27 21:32:25 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED
2025-09-27 21:32:25.973755 | orchestrator | 2025-09-27 21:32:25 | INFO  | Task 6bff2521-fbfa-4132-beb2-5284f2d441c8 is in state SUCCESS
2025-09-27 21:32:25.978982 | orchestrator |
2025-09-27 21:32:25.979029 | orchestrator |
2025-09-27 21:32:25.979043 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-27 21:32:25.979055 | orchestrator |
2025-09-27 21:32:25.979066 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-27 21:32:25.979078 | orchestrator | Saturday 27 September 2025 21:30:38 +0000 (0:00:00.247) 0:00:00.247 ****
2025-09-27 21:32:25.979089 | orchestrator | ok: [testbed-manager]
2025-09-27 21:32:25.979102 | orchestrator |
2025-09-27 21:32:25.979113 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-27 21:32:25.979123 | orchestrator | Saturday 27 September 2025 21:30:39 +0000 (0:00:00.639) 0:00:00.886 ****
2025-09-27 21:32:25.979133 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-27 21:32:25.979143 | orchestrator |
2025-09-27 21:32:25.979153 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-27 21:32:25.979164 | orchestrator | Saturday 27 September 2025 21:30:39 +0000 (0:00:00.541) 0:00:01.428 ****
2025-09-27 21:32:25.979174 | orchestrator | changed: [testbed-manager]
2025-09-27 21:32:25.979184 | orchestrator |
2025-09-27 21:32:25.979194 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-27 21:32:25.979204 | orchestrator | Saturday 27 September 2025 21:30:40 +0000 (0:00:00.975) 0:00:02.404 ****
2025-09-27 21:32:25.979213 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
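The "FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left)." entry above is Ansible's normal retries/until behaviour rather than an error: the module is re-run at fixed intervals until its result satisfies the task's until condition (here, presumably, until the freshly started phpmyadmin container responds), and each failed attempt decrements the retry counter. A minimal sketch of that wait loop in plain Python, with a hypothetical check() callable and illustrative retry/delay values standing in for the real module result, not the actual osism.services.phpmyadmin role code:

    import time

    def retry_until(check, retries=10, delay=5):
        """Roughly mimic Ansible retries/until: re-run check() until it
        succeeds, allowing up to `retries` further attempts, `delay`
        seconds apart, before giving up."""
        result = check()
        remaining = retries
        while not result.get("healthy") and remaining > 0:
            print(f"FAILED - RETRYING ({remaining} retries left).")
            time.sleep(delay)
            result = check()
            remaining -= 1
        if not result.get("healthy"):
            raise RuntimeError("service did not become healthy in time")
        return result

That waiting is also why "Manage phpmyadmin service" dominates the TASKS RECAP below at 49.33s: the task itself is quick, but it keeps polling until the service is reachable.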
2025-09-27 21:32:25.979223 | orchestrator | ok: [testbed-manager]
2025-09-27 21:32:25.979233 | orchestrator |
2025-09-27 21:32:25.979243 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-27 21:32:25.979252 | orchestrator | Saturday 27 September 2025 21:31:30 +0000 (0:00:49.330) 0:00:51.734 ****
2025-09-27 21:32:25.979262 | orchestrator | changed: [testbed-manager]
2025-09-27 21:32:25.979272 | orchestrator |
2025-09-27 21:32:25.979282 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:32:25.979292 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:32:25.979323 | orchestrator |
2025-09-27 21:32:25.979333 | orchestrator |
2025-09-27 21:32:25.979343 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:32:25.979352 | orchestrator | Saturday 27 September 2025 21:31:33 +0000 (0:00:03.415) 0:00:55.150 ****
2025-09-27 21:32:25.979362 | orchestrator | ===============================================================================
2025-09-27 21:32:25.979372 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.33s
2025-09-27 21:32:25.979381 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.42s
2025-09-27 21:32:25.979391 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.98s
2025-09-27 21:32:25.979401 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.64s
2025-09-27 21:32:25.979410 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.54s
2025-09-27 21:32:25.979420 | orchestrator |
2025-09-27 21:32:25.979429 | orchestrator |
2025-09-27 21:32:25.979439 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-27 21:32:25.979449 | orchestrator |
2025-09-27 21:32:25.979458 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-27 21:32:25.979468 | orchestrator | Saturday 27 September 2025 21:30:13 +0000 (0:00:00.318) 0:00:00.318 ****
2025-09-27 21:32:25.979484 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-27 21:32:25.979495 | orchestrator |
2025-09-27 21:32:25.979506 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-27 21:32:25.979516 | orchestrator | Saturday 27 September 2025 21:30:14 +0000 (0:00:01.173) 0:00:01.492 ****
2025-09-27 21:32:25.979525 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 21:32:25.979535 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 21:32:25.979545 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 21:32:25.979555 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 21:32:25.979564 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-27 21:32:25.979574 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-27 21:32:25.979583 | orchestrator | changed:
[testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-27 21:32:25.979593 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-27 21:32:25.979603 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-27 21:32:25.979612 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-27 21:32:25.979643 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-27 21:32:25.979654 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979664 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-27 21:32:25.979674 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979684 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-27 21:32:25.979694 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979741 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979753 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979763 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-27 21:32:25.979780 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979790 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-27 21:32:25.979800 | orchestrator | 2025-09-27 21:32:25.979810 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-27 21:32:25.979819 | orchestrator | Saturday 27 September 2025 21:30:18 +0000 (0:00:03.869) 0:00:05.361 **** 2025-09-27 21:32:25.979829 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:32:25.979840 | orchestrator | 2025-09-27 21:32:25.979850 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-27 21:32:25.979860 | orchestrator | Saturday 27 September 2025 21:30:19 +0000 (0:00:01.097) 0:00:06.459 **** 2025-09-27 21:32:25.979965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.979984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.980001 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.980012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.980022 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.980032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.980077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.980090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980170 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.980281 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-27 21:32:25.980296 | orchestrator | 2025-09-27 21:32:25.980306 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-27 21:32:25.980316 | orchestrator | Saturday 27 September 2025 21:30:24 +0000 (0:00:04.571) 0:00:11.030 **** 2025-09-27 21:32:25.980353 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980366 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980388 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980468 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:25.980477 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:32:25.980487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980508 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:32:25.980517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980606 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:32:25.980618 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
21:32:25.980650 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:32:25.980661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980709 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:32:25.980719 | orchestrator | 2025-09-27 21:32:25.980730 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-27 21:32:25.980742 | orchestrator | Saturday 27 September 2025 21:30:26 +0000 (0:00:02.003) 0:00:13.034 **** 2025-09-27 21:32:25.980752 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980782 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980827 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:25.980842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980859 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:32:25.980870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980893 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:32:25.980904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980945 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:32:25.980956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.980967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.980988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 
21:32:25.980998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.981008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.981018 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:32:25.981034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.981044 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:32:25.981054 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-27 21:32:25.981064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.981074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.981089 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:32:25.981099 | orchestrator 
| 2025-09-27 21:32:25.981109 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-09-27 21:32:25.981118 | orchestrator | Saturday 27 September 2025 21:30:28 +0000 (0:00:02.308) 0:00:15.343 **** 2025-09-27 21:32:25.981128 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:25.981138 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:32:25.981147 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:32:25.981157 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:32:25.981166 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:32:25.981176 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:32:25.981185 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:32:25.981194 | orchestrator | 2025-09-27 21:32:25.981204 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-09-27 21:32:25.981214 | orchestrator | Saturday 27 September 2025 21:30:29 +0000 (0:00:01.084) 0:00:16.427 **** 2025-09-27 21:32:25.981223 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:32:25.981233 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:32:25.981246 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:32:25.981256 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:32:25.981266 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:32:25.981275 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:32:25.981285 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:32:25.981294 | orchestrator | 2025-09-27 21:32:25.981304 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-09-27 21:32:25.981313 | orchestrator | Saturday 27 September 2025 21:30:30 +0000 (0:00:01.096) 0:00:17.524 **** 2025-09-27 21:32:25.981323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981363 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981390 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.981434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981501 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981594 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981605 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.981615 | orchestrator | 2025-09-27 21:32:25.981651 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-27 21:32:25.981662 | orchestrator | Saturday 27 September 2025 21:30:36 +0000 (0:00:06.225) 0:00:23.749 **** 2025-09-27 21:32:25.981672 | orchestrator | [WARNING]: Skipped 2025-09-27 21:32:25.981682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-27 21:32:25.981691 | orchestrator | to this access issue: 2025-09-27 21:32:25.981701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-27 21:32:25.981710 | orchestrator | directory 2025-09-27 21:32:25.981720 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:32:25.981730 | orchestrator | 
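The loop items echoed by the "Copying over config.json files for services" task above spell out the per-service definitions the common role iterates over: container name, group, image, environment and bind mounts for fluentd, kolla-toolbox and cron. Reproduced as a plain data structure, the fluentd entry looks roughly like the sketch below; the variable name common_services is an assumption based on kolla-ansible convention and does not appear in this output.

# Sketch only: one entry of the service map iterated by the common role, copied
# from the loop item shown in the task output above. The name "common_services"
# is assumed (kolla-ansible convention), not taken from this log.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "fluentd_data:/var/lib/fluentd/data/",
            "/var/log/journal:/var/log/journal:ro",
        ],
        "dimensions": {},
    },
}
# The generated /etc/kolla/<service>/ directory on each host is bind-mounted
# read-only at /var/lib/kolla/config_files/ inside the container, and
# KOLLA_CONFIG_STRATEGY=COPY_ALWAYS asks the container entrypoint to copy
# those files into place on every start.

The warnings above about /opt/configuration/environments/kolla/files/overlays/fluentd/input simply mean that this testbed configuration ships no custom fluentd overlay snippets, so the find task returns nothing; the remaining overlay lookups below behave the same way.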
2025-09-27 21:32:25.981739 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-27 21:32:25.981749 | orchestrator | Saturday 27 September 2025 21:30:37 +0000 (0:00:00.971) 0:00:24.721 **** 2025-09-27 21:32:25.981758 | orchestrator | [WARNING]: Skipped 2025-09-27 21:32:25.981768 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-27 21:32:25.981777 | orchestrator | to this access issue: 2025-09-27 21:32:25.981787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-27 21:32:25.981797 | orchestrator | directory 2025-09-27 21:32:25.981806 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:32:25.981816 | orchestrator | 2025-09-27 21:32:25.981825 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-27 21:32:25.981839 | orchestrator | Saturday 27 September 2025 21:30:39 +0000 (0:00:01.350) 0:00:26.071 **** 2025-09-27 21:32:25.981849 | orchestrator | [WARNING]: Skipped 2025-09-27 21:32:25.981858 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-27 21:32:25.981868 | orchestrator | to this access issue: 2025-09-27 21:32:25.981877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-27 21:32:25.981887 | orchestrator | directory 2025-09-27 21:32:25.981896 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:32:25.981906 | orchestrator | 2025-09-27 21:32:25.981915 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-27 21:32:25.981925 | orchestrator | Saturday 27 September 2025 21:30:40 +0000 (0:00:00.794) 0:00:26.866 **** 2025-09-27 21:32:25.981934 | orchestrator | [WARNING]: Skipped 2025-09-27 21:32:25.981944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-27 21:32:25.981953 | orchestrator | to this access issue: 2025-09-27 21:32:25.981963 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-27 21:32:25.981972 | orchestrator | directory 2025-09-27 21:32:25.981981 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:32:25.981991 | orchestrator | 2025-09-27 21:32:25.982000 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-27 21:32:25.982010 | orchestrator | Saturday 27 September 2025 21:30:40 +0000 (0:00:00.762) 0:00:27.629 **** 2025-09-27 21:32:25.982058 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.982068 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.982078 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.982087 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.982097 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.982106 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.982115 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.982125 | orchestrator | 2025-09-27 21:32:25.982134 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-27 21:32:25.982144 | orchestrator | Saturday 27 September 2025 21:30:44 +0000 (0:00:03.260) 0:00:30.889 **** 2025-09-27 21:32:25.982154 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 
21:32:25.982164 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 21:32:25.982173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 21:32:25.982189 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 21:32:25.982199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 21:32:25.982209 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 21:32:25.982218 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-27 21:32:25.982228 | orchestrator | 2025-09-27 21:32:25.982237 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-27 21:32:25.982247 | orchestrator | Saturday 27 September 2025 21:30:46 +0000 (0:00:02.189) 0:00:33.078 **** 2025-09-27 21:32:25.982257 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.982266 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.982276 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.982285 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.982295 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.982304 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.982314 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.982323 | orchestrator | 2025-09-27 21:32:25.982333 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-27 21:32:25.982343 | orchestrator | Saturday 27 September 2025 21:30:48 +0000 (0:00:02.076) 0:00:35.155 **** 2025-09-27 21:32:25.982353 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982363 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.982378 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
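The same KOLLA_CONFIG_STRATEGY=COPY_ALWAYS environment variable appears in every service definition in this play. Inside a kolla image the entrypoint reads /var/lib/kolla/config_files/config.json and copies the listed files into place before the service starts; the snippet below is a heavily simplified, hedged illustration of that copy step, not kolla's actual set_configs implementation, and the paths shown are only what the bind mounts in this log imply.

# Simplified sketch of the config-copy step a kolla container performs at start
# when KOLLA_CONFIG_STRATEGY=COPY_ALWAYS. Illustration only; the real entrypoint
# also handles globs, ownership, permissions and COPY_ONCE semantics.
import json
import shutil
from pathlib import Path

CONFIG_JSON = Path("/var/lib/kolla/config_files/config.json")

def copy_configs() -> None:
    spec = json.loads(CONFIG_JSON.read_text())
    for entry in spec.get("config_files", []):
        # Each entry names a source under /var/lib/kolla/config_files/ (the
        # read-only bind mount from /etc/kolla/<service>/ on the host) and a
        # destination inside the container.
        src, dest = Path(entry["source"]), Path(entry["dest"])
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)

if __name__ == "__main__":
    copy_configs()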
2025-09-27 21:32:25.982401 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.982427 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.982448 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-27 21:32:25.982468 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982488 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.982513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.982524 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982534 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982544 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:32:25.982575 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982595 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982605 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982615 | orchestrator | 2025-09-27 21:32:25.982640 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-27 21:32:25.982650 | orchestrator | Saturday 27 September 2025 21:30:50 +0000 (0:00:01.952) 0:00:37.107 **** 2025-09-27 21:32:25.982660 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982670 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982679 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982697 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982707 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982717 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982727 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-27 21:32:25.982736 | orchestrator | 2025-09-27 21:32:25.982746 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-27 21:32:25.982755 | orchestrator | Saturday 27 September 2025 21:30:53 +0000 (0:00:02.992) 0:00:40.099 **** 2025-09-27 21:32:25.982765 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982784 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982793 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982803 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982812 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982822 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-27 21:32:25.982831 | orchestrator | 2025-09-27 21:32:25.982848 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-27 21:32:25.982858 | orchestrator | Saturday 27 September 2025 21:30:55 +0000 (0:00:02.413) 0:00:42.513 **** 2025-09-27 21:32:25.982868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2025-09-27 21:32:25.982903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982928 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-27 21:32:25.982949 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982979 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.982999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983071 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983106 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983116 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:32:25.983126 | orchestrator | 2025-09-27 21:32:25.983140 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-27 21:32:25.983150 | orchestrator | Saturday 27 September 2025 21:30:59 +0000 (0:00:03.403) 0:00:45.916 **** 2025-09-27 21:32:25.983159 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.983169 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.983178 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.983188 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.983198 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.983217 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.983226 | 
orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.983236 | orchestrator | 2025-09-27 21:32:25.983246 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-27 21:32:25.983255 | orchestrator | Saturday 27 September 2025 21:31:00 +0000 (0:00:01.718) 0:00:47.635 **** 2025-09-27 21:32:25.983265 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.983274 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.983284 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.983293 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.983303 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.983312 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.983322 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.983331 | orchestrator | 2025-09-27 21:32:25.983341 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983350 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:01.228) 0:00:48.864 **** 2025-09-27 21:32:25.983360 | orchestrator | 2025-09-27 21:32:25.983369 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983379 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.065) 0:00:48.929 **** 2025-09-27 21:32:25.983388 | orchestrator | 2025-09-27 21:32:25.983398 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983407 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.067) 0:00:48.997 **** 2025-09-27 21:32:25.983417 | orchestrator | 2025-09-27 21:32:25.983427 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983436 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.080) 0:00:49.078 **** 2025-09-27 21:32:25.983446 | orchestrator | 2025-09-27 21:32:25.983455 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983465 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.349) 0:00:49.428 **** 2025-09-27 21:32:25.983474 | orchestrator | 2025-09-27 21:32:25.983484 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983493 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.212) 0:00:49.640 **** 2025-09-27 21:32:25.983503 | orchestrator | 2025-09-27 21:32:25.983512 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-27 21:32:25.983522 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:00.093) 0:00:49.733 **** 2025-09-27 21:32:25.983532 | orchestrator | 2025-09-27 21:32:25.983541 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-27 21:32:25.983550 | orchestrator | Saturday 27 September 2025 21:31:03 +0000 (0:00:00.070) 0:00:49.804 **** 2025-09-27 21:32:25.983560 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.983570 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.983579 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.983589 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.983598 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.983608 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.983617 
| orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.983642 | orchestrator | 2025-09-27 21:32:25.983655 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-27 21:32:25.983665 | orchestrator | Saturday 27 September 2025 21:31:40 +0000 (0:00:37.003) 0:01:26.807 **** 2025-09-27 21:32:25.983675 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.983685 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.983694 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.983704 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.983713 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.983723 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.983732 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.983742 | orchestrator | 2025-09-27 21:32:25.983751 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-27 21:32:25.983768 | orchestrator | Saturday 27 September 2025 21:32:12 +0000 (0:00:32.608) 0:01:59.416 **** 2025-09-27 21:32:25.983778 | orchestrator | ok: [testbed-manager] 2025-09-27 21:32:25.983788 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:32:25.983797 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:32:25.983807 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:32:25.983816 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:32:25.983826 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:32:25.983835 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:32:25.983845 | orchestrator | 2025-09-27 21:32:25.983854 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-27 21:32:25.983864 | orchestrator | Saturday 27 September 2025 21:32:14 +0000 (0:00:01.930) 0:02:01.347 **** 2025-09-27 21:32:25.983873 | orchestrator | changed: [testbed-manager] 2025-09-27 21:32:25.983883 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:32:25.983893 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:32:25.983902 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:25.983912 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:25.983921 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:32:25.983930 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:25.983940 | orchestrator | 2025-09-27 21:32:25.983950 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:32:25.983960 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.983970 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.983985 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.983995 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.984005 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.984015 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.984024 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-27 21:32:25.984034 | orchestrator | 2025-09-27 21:32:25.984044 | orchestrator | 2025-09-27 
21:32:25.984053 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:32:25.984063 | orchestrator | Saturday 27 September 2025 21:32:23 +0000 (0:00:09.371) 0:02:10.718 **** 2025-09-27 21:32:25.984073 | orchestrator | =============================================================================== 2025-09-27 21:32:25.984082 | orchestrator | common : Restart fluentd container ------------------------------------- 37.00s 2025-09-27 21:32:25.984092 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.61s 2025-09-27 21:32:25.984101 | orchestrator | common : Restart cron container ----------------------------------------- 9.37s 2025-09-27 21:32:25.984111 | orchestrator | common : Copying over config.json files for services -------------------- 6.23s 2025-09-27 21:32:25.984121 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.57s 2025-09-27 21:32:25.984130 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.87s 2025-09-27 21:32:25.984140 | orchestrator | common : Check common containers ---------------------------------------- 3.40s 2025-09-27 21:32:25.984149 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.26s 2025-09-27 21:32:25.984159 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.99s 2025-09-27 21:32:25.984175 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.41s 2025-09-27 21:32:25.984184 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.31s 2025-09-27 21:32:25.984194 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.19s 2025-09-27 21:32:25.984204 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.08s 2025-09-27 21:32:25.984213 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.00s 2025-09-27 21:32:25.984223 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.95s 2025-09-27 21:32:25.984232 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.93s 2025-09-27 21:32:25.984242 | orchestrator | common : Creating log volume -------------------------------------------- 1.72s 2025-09-27 21:32:25.984252 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.35s 2025-09-27 21:32:25.984261 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.23s 2025-09-27 21:32:25.984278 | orchestrator | common : include_tasks -------------------------------------------------- 1.17s 2025-09-27 21:32:25.984288 | orchestrator | 2025-09-27 21:32:25 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:25.984298 | orchestrator | 2025-09-27 21:32:25 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:25.984308 | orchestrator | 2025-09-27 21:32:25 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:25.984318 | orchestrator | 2025-09-27 21:32:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:29.019503 | orchestrator | 2025-09-27 21:32:29 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED 2025-09-27 21:32:29.025834 | orchestrator | 2025-09-27 
21:32:29 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:29.026808 | orchestrator | 2025-09-27 21:32:29 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:29.027404 | orchestrator | 2025-09-27 21:32:29 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:29.031535 | orchestrator | 2025-09-27 21:32:29 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:29.032034 | orchestrator | 2025-09-27 21:32:29 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:29.032058 | orchestrator | 2025-09-27 21:32:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:32.057070 | orchestrator | 2025-09-27 21:32:32 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED 2025-09-27 21:32:32.058742 | orchestrator | 2025-09-27 21:32:32 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:32.063183 | orchestrator | 2025-09-27 21:32:32 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:32.065932 | orchestrator | 2025-09-27 21:32:32 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:32.070826 | orchestrator | 2025-09-27 21:32:32 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:32.071717 | orchestrator | 2025-09-27 21:32:32 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:32.071741 | orchestrator | 2025-09-27 21:32:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:35.108829 | orchestrator | 2025-09-27 21:32:35 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED 2025-09-27 21:32:35.109032 | orchestrator | 2025-09-27 21:32:35 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:35.110155 | orchestrator | 2025-09-27 21:32:35 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:35.110542 | orchestrator | 2025-09-27 21:32:35 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:35.111277 | orchestrator | 2025-09-27 21:32:35 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:35.112005 | orchestrator | 2025-09-27 21:32:35 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:35.113564 | orchestrator | 2025-09-27 21:32:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:38.139788 | orchestrator | 2025-09-27 21:32:38 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED 2025-09-27 21:32:38.139895 | orchestrator | 2025-09-27 21:32:38 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:38.140390 | orchestrator | 2025-09-27 21:32:38 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:38.141217 | orchestrator | 2025-09-27 21:32:38 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:38.141667 | orchestrator | 2025-09-27 21:32:38 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:38.143360 | orchestrator | 2025-09-27 21:32:38 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:38.143391 | orchestrator | 2025-09-27 21:32:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:41.171506 | 
orchestrator | 2025-09-27 21:32:41 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED 2025-09-27 21:32:41.172057 | orchestrator | 2025-09-27 21:32:41 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:41.173697 | orchestrator | 2025-09-27 21:32:41 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:41.175426 | orchestrator | 2025-09-27 21:32:41 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:41.177876 | orchestrator | 2025-09-27 21:32:41 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:41.178753 | orchestrator | 2025-09-27 21:32:41 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:41.178778 | orchestrator | 2025-09-27 21:32:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:44.209666 | orchestrator | 2025-09-27 21:32:44 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state STARTED 2025-09-27 21:32:44.211667 | orchestrator | 2025-09-27 21:32:44 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:44.213468 | orchestrator | 2025-09-27 21:32:44 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:44.215360 | orchestrator | 2025-09-27 21:32:44 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:44.216740 | orchestrator | 2025-09-27 21:32:44 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:44.217867 | orchestrator | 2025-09-27 21:32:44 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:44.217893 | orchestrator | 2025-09-27 21:32:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:47.248434 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task e0b6c16f-4967-4f38-8999-9c470835ea05 is in state SUCCESS 2025-09-27 21:32:47.250088 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:47.250755 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:47.251448 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:32:47.252825 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:47.253435 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:47.255270 | orchestrator | 2025-09-27 21:32:47 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:47.255300 | orchestrator | 2025-09-27 21:32:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:50.280466 | orchestrator | 2025-09-27 21:32:50 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:50.281119 | orchestrator | 2025-09-27 21:32:50 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:50.282118 | orchestrator | 2025-09-27 21:32:50 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:32:50.283768 | orchestrator | 2025-09-27 21:32:50 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:50.284479 | orchestrator | 2025-09-27 21:32:50 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is 
in state STARTED 2025-09-27 21:32:50.286066 | orchestrator | 2025-09-27 21:32:50 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:50.286092 | orchestrator | 2025-09-27 21:32:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:53.309116 | orchestrator | 2025-09-27 21:32:53 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:53.309439 | orchestrator | 2025-09-27 21:32:53 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:53.310068 | orchestrator | 2025-09-27 21:32:53 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:32:53.310706 | orchestrator | 2025-09-27 21:32:53 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:53.311347 | orchestrator | 2025-09-27 21:32:53 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:53.314164 | orchestrator | 2025-09-27 21:32:53 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:53.314190 | orchestrator | 2025-09-27 21:32:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:56.362285 | orchestrator | 2025-09-27 21:32:56 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:56.362400 | orchestrator | 2025-09-27 21:32:56 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:56.364997 | orchestrator | 2025-09-27 21:32:56 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:32:56.365951 | orchestrator | 2025-09-27 21:32:56 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:56.366418 | orchestrator | 2025-09-27 21:32:56 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:56.368679 | orchestrator | 2025-09-27 21:32:56 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state STARTED 2025-09-27 21:32:56.368704 | orchestrator | 2025-09-27 21:32:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:32:59.551308 | orchestrator | 2025-09-27 21:32:59 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:32:59.555860 | orchestrator | 2025-09-27 21:32:59 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:32:59.556346 | orchestrator | 2025-09-27 21:32:59 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:32:59.561334 | orchestrator | 2025-09-27 21:32:59 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:32:59.561367 | orchestrator | 2025-09-27 21:32:59 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:32:59.563369 | orchestrator | 2025-09-27 21:32:59.563434 | orchestrator | 2025-09-27 21:32:59.563449 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:32:59.563462 | orchestrator | 2025-09-27 21:32:59.563473 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:32:59.563485 | orchestrator | Saturday 27 September 2025 21:32:31 +0000 (0:00:00.451) 0:00:00.451 **** 2025-09-27 21:32:59.563496 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:32:59.563508 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:32:59.563519 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:32:59.563530 | orchestrator | 2025-09-27 
21:32:59.563541 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:32:59.563552 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:00.415) 0:00:00.866 **** 2025-09-27 21:32:59.563563 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-27 21:32:59.563575 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-27 21:32:59.563586 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-27 21:32:59.563623 | orchestrator | 2025-09-27 21:32:59.563635 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-27 21:32:59.563646 | orchestrator | 2025-09-27 21:32:59.563658 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-27 21:32:59.563669 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:00.533) 0:00:01.400 **** 2025-09-27 21:32:59.563680 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:32:59.563692 | orchestrator | 2025-09-27 21:32:59.563703 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-27 21:32:59.563714 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.849) 0:00:02.249 **** 2025-09-27 21:32:59.563725 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-27 21:32:59.563736 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-27 21:32:59.563747 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-27 21:32:59.563758 | orchestrator | 2025-09-27 21:32:59.563768 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-27 21:32:59.563779 | orchestrator | Saturday 27 September 2025 21:32:34 +0000 (0:00:00.840) 0:00:03.090 **** 2025-09-27 21:32:59.563790 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-27 21:32:59.563801 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-27 21:32:59.563812 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-27 21:32:59.563823 | orchestrator | 2025-09-27 21:32:59.563833 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-27 21:32:59.563844 | orchestrator | Saturday 27 September 2025 21:32:36 +0000 (0:00:02.060) 0:00:05.151 **** 2025-09-27 21:32:59.563855 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:59.563866 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:59.563876 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:59.563887 | orchestrator | 2025-09-27 21:32:59.563898 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-27 21:32:59.563909 | orchestrator | Saturday 27 September 2025 21:32:38 +0000 (0:00:01.926) 0:00:07.078 **** 2025-09-27 21:32:59.563919 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:59.563930 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:59.563961 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:59.563973 | orchestrator | 2025-09-27 21:32:59.563986 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:32:59.563998 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:32:59.564012 
| orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:32:59.564036 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:32:59.564049 | orchestrator | 2025-09-27 21:32:59.564062 | orchestrator | 2025-09-27 21:32:59.564074 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:32:59.564086 | orchestrator | Saturday 27 September 2025 21:32:45 +0000 (0:00:07.171) 0:00:14.250 **** 2025-09-27 21:32:59.564098 | orchestrator | =============================================================================== 2025-09-27 21:32:59.564110 | orchestrator | memcached : Restart memcached container --------------------------------- 7.17s 2025-09-27 21:32:59.564122 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.06s 2025-09-27 21:32:59.564134 | orchestrator | memcached : Check memcached container ----------------------------------- 1.93s 2025-09-27 21:32:59.564146 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s 2025-09-27 21:32:59.564158 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.84s 2025-09-27 21:32:59.564170 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-09-27 21:32:59.564182 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-09-27 21:32:59.564194 | orchestrator | 2025-09-27 21:32:59.564206 | orchestrator | 2025-09-27 21:32:59.564218 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:32:59.564230 | orchestrator | 2025-09-27 21:32:59.564241 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:32:59.564253 | orchestrator | Saturday 27 September 2025 21:32:31 +0000 (0:00:00.251) 0:00:00.251 **** 2025-09-27 21:32:59.564265 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:32:59.564278 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:32:59.564290 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:32:59.564301 | orchestrator | 2025-09-27 21:32:59.564312 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:32:59.564338 | orchestrator | Saturday 27 September 2025 21:32:31 +0000 (0:00:00.400) 0:00:00.651 **** 2025-09-27 21:32:59.564349 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-27 21:32:59.564360 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-27 21:32:59.564371 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-27 21:32:59.564382 | orchestrator | 2025-09-27 21:32:59.564393 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-27 21:32:59.564404 | orchestrator | 2025-09-27 21:32:59.564415 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-27 21:32:59.564426 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:00.628) 0:00:01.280 **** 2025-09-27 21:32:59.564437 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:32:59.564448 | orchestrator | 2025-09-27 21:32:59.564459 | orchestrator | TASK [redis : Ensuring config directories exist] 
******************************* 2025-09-27 21:32:59.564470 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.691) 0:00:01.972 **** 2025-09-27 21:32:59.564483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564580 | orchestrator | 2025-09-27 21:32:59.564591 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-27 21:32:59.564627 | orchestrator | Saturday 27 September 2025 21:32:34 +0000 (0:00:01.642) 0:00:03.614 **** 2025-09-27 21:32:59.564639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564726 | orchestrator | 2025-09-27 21:32:59.564737 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-27 21:32:59.564749 | orchestrator | Saturday 27 September 2025 21:32:37 +0000 (0:00:03.017) 0:00:06.631 **** 2025-09-27 21:32:59.564760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564840 | orchestrator | 2025-09-27 21:32:59.564856 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-27 21:32:59.564867 | orchestrator | Saturday 27 September 2025 21:32:40 +0000 (0:00:02.651) 0:00:09.283 **** 2025-09-27 21:32:59.564879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-27 21:32:59.564959 | orchestrator | 2025-09-27 21:32:59.564969 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-27 21:32:59.564980 | orchestrator | Saturday 27 September 2025 21:32:42 +0000 (0:00:01.841) 0:00:11.125 **** 2025-09-27 21:32:59.564991 | orchestrator | 2025-09-27 21:32:59.565002 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-27 21:32:59.565025 | orchestrator | Saturday 27 September 2025 21:32:42 +0000 (0:00:00.085) 0:00:11.210 **** 2025-09-27 21:32:59.565036 | orchestrator | 2025-09-27 21:32:59.565047 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-27 21:32:59.565058 | orchestrator | Saturday 27 September 2025 21:32:42 +0000 (0:00:00.080) 0:00:11.291 **** 2025-09-27 21:32:59.565069 | orchestrator | 2025-09-27 21:32:59.565080 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 
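
The loop items printed by the redis tasks above are per-service container definitions: image, bind mounts, and a Docker-style healthcheck; a change to any of the generated files notifies the restart handlers that run next. Below is a minimal Python sketch of that data shape, with values copied from the log; the enabled_services helper is illustrative only and not part of kolla-ansible.

# One service entry in the shape printed by the loop items above (values from the log).
redis_services = {
    "redis": {
        "container_name": "redis",
        "group": "redis",
        "enabled": True,
        "image": "registry.osism.tech/kolla/redis:2024.2",
        "volumes": [
            "/etc/kolla/redis/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "redis:/var/lib/redis/",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
            "timeout": "30",
        },
    },
}

# Illustrative helper (an assumption, not kolla-ansible code): only entries
# flagged enabled=True are configured, checked, and restarted.
def enabled_services(services: dict) -> list[str]:
    return [name for name, svc in services.items() if svc.get("enabled")]

print(enabled_services(redis_services))  # -> ['redis']
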
2025-09-27 21:32:59.565091 | orchestrator | Saturday 27 September 2025 21:32:42 +0000 (0:00:00.114) 0:00:11.405 **** 2025-09-27 21:32:59.565101 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:59.565112 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:59.565123 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:59.565134 | orchestrator | 2025-09-27 21:32:59.565145 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-27 21:32:59.565156 | orchestrator | Saturday 27 September 2025 21:32:50 +0000 (0:00:07.746) 0:00:19.152 **** 2025-09-27 21:32:59.565167 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:32:59.565178 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:32:59.565188 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:32:59.565199 | orchestrator | 2025-09-27 21:32:59.565210 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:32:59.565221 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:32:59.565232 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:32:59.565243 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:32:59.565254 | orchestrator | 2025-09-27 21:32:59.565265 | orchestrator | 2025-09-27 21:32:59.565276 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:32:59.565287 | orchestrator | Saturday 27 September 2025 21:32:57 +0000 (0:00:07.118) 0:00:26.271 **** 2025-09-27 21:32:59.565297 | orchestrator | =============================================================================== 2025-09-27 21:32:59.565308 | orchestrator | redis : Restart redis container ----------------------------------------- 7.75s 2025-09-27 21:32:59.565319 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.12s 2025-09-27 21:32:59.565387 | orchestrator | redis : Copying over default config.json files -------------------------- 3.02s 2025-09-27 21:32:59.565398 | orchestrator | redis : Copying over redis config files --------------------------------- 2.65s 2025-09-27 21:32:59.565409 | orchestrator | redis : Check redis containers ------------------------------------------ 1.84s 2025-09-27 21:32:59.565420 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.64s 2025-09-27 21:32:59.565431 | orchestrator | redis : include_tasks --------------------------------------------------- 0.69s 2025-09-27 21:32:59.565441 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-09-27 21:32:59.565452 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-09-27 21:32:59.565463 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.28s 2025-09-27 21:32:59.565474 | orchestrator | 2025-09-27 21:32:59 | INFO  | Task 0754cdfa-26cd-4a19-9c6b-49c2648ad85f is in state SUCCESS 2025-09-27 21:32:59.565485 | orchestrator | 2025-09-27 21:32:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:02.584083 | orchestrator | 2025-09-27 21:33:02 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:02.586427 | orchestrator | 2025-09-27 21:33:02 | INFO  | Task 
7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:02.586460 | orchestrator | 2025-09-27 21:33:02 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:02.586493 | orchestrator | 2025-09-27 21:33:02 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:02.589451 | orchestrator | 2025-09-27 21:33:02 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:02.589483 | orchestrator | 2025-09-27 21:33:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:05.638335 | orchestrator | 2025-09-27 21:33:05 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:05.638427 | orchestrator | 2025-09-27 21:33:05 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:05.638442 | orchestrator | 2025-09-27 21:33:05 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:05.638454 | orchestrator | 2025-09-27 21:33:05 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:05.638465 | orchestrator | 2025-09-27 21:33:05 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:05.638476 | orchestrator | 2025-09-27 21:33:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:08.898471 | orchestrator | 2025-09-27 21:33:08 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:08.898976 | orchestrator | 2025-09-27 21:33:08 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:08.900545 | orchestrator | 2025-09-27 21:33:08 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:08.901256 | orchestrator | 2025-09-27 21:33:08 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:08.901929 | orchestrator | 2025-09-27 21:33:08 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:08.902342 | orchestrator | 2025-09-27 21:33:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:11.927928 | orchestrator | 2025-09-27 21:33:11 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:11.928173 | orchestrator | 2025-09-27 21:33:11 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:11.928954 | orchestrator | 2025-09-27 21:33:11 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:11.931822 | orchestrator | 2025-09-27 21:33:11 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:11.932522 | orchestrator | 2025-09-27 21:33:11 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:11.932564 | orchestrator | 2025-09-27 21:33:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:14.960571 | orchestrator | 2025-09-27 21:33:14 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:14.961258 | orchestrator | 2025-09-27 21:33:14 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:14.961985 | orchestrator | 2025-09-27 21:33:14 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:14.962395 | orchestrator | 2025-09-27 21:33:14 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:14.963705 | orchestrator | 2025-09-27 
21:33:14 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:14.963933 | orchestrator | 2025-09-27 21:33:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:18.011071 | orchestrator | 2025-09-27 21:33:18 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:18.011704 | orchestrator | 2025-09-27 21:33:18 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:18.014067 | orchestrator | 2025-09-27 21:33:18 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:18.014739 | orchestrator | 2025-09-27 21:33:18 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:18.016798 | orchestrator | 2025-09-27 21:33:18 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:18.016825 | orchestrator | 2025-09-27 21:33:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:21.062522 | orchestrator | 2025-09-27 21:33:21 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:21.063132 | orchestrator | 2025-09-27 21:33:21 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:21.064567 | orchestrator | 2025-09-27 21:33:21 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:21.066128 | orchestrator | 2025-09-27 21:33:21 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:21.067439 | orchestrator | 2025-09-27 21:33:21 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:21.067466 | orchestrator | 2025-09-27 21:33:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:24.171817 | orchestrator | 2025-09-27 21:33:24 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:24.171910 | orchestrator | 2025-09-27 21:33:24 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:24.171924 | orchestrator | 2025-09-27 21:33:24 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:24.171936 | orchestrator | 2025-09-27 21:33:24 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state STARTED 2025-09-27 21:33:24.171947 | orchestrator | 2025-09-27 21:33:24 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:24.171958 | orchestrator | 2025-09-27 21:33:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:27.161919 | orchestrator | 2025-09-27 21:33:27 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:27.162182 | orchestrator | 2025-09-27 21:33:27 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:27.163160 | orchestrator | 2025-09-27 21:33:27 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:27.164559 | orchestrator | 2025-09-27 21:33:27 | INFO  | Task 3054d029-5579-48ea-940e-3d134520c412 is in state SUCCESS 2025-09-27 21:33:27.166840 | orchestrator | 2025-09-27 21:33:27.166880 | orchestrator | 2025-09-27 21:33:27.166892 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:33:27.166904 | orchestrator | 2025-09-27 21:33:27.166916 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:33:27.166927 | orchestrator | Saturday 27 
September 2025 21:32:31 +0000 (0:00:00.315) 0:00:00.315 **** 2025-09-27 21:33:27.166938 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:27.166959 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:27.166972 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:27.166983 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:27.166994 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:27.167004 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:27.167015 | orchestrator | 2025-09-27 21:33:27.167026 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:33:27.167037 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:01.053) 0:00:01.368 **** 2025-09-27 21:33:27.167066 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-27 21:33:27.167077 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-27 21:33:27.167088 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-27 21:33:27.167107 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-27 21:33:27.167118 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-27 21:33:27.167129 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-27 21:33:27.167140 | orchestrator | 2025-09-27 21:33:27.167150 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-27 21:33:27.167166 | orchestrator | 2025-09-27 21:33:27.167185 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-27 21:33:27.167206 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:01.205) 0:00:02.574 **** 2025-09-27 21:33:27.167225 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:33:27.167246 | orchestrator | 2025-09-27 21:33:27.167266 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-27 21:33:27.167279 | orchestrator | Saturday 27 September 2025 21:32:35 +0000 (0:00:01.561) 0:00:04.135 **** 2025-09-27 21:33:27.167290 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-27 21:33:27.167302 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-27 21:33:27.167319 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-27 21:33:27.167340 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-27 21:33:27.167360 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-27 21:33:27.167380 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-27 21:33:27.167400 | orchestrator | 2025-09-27 21:33:27.167419 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-27 21:33:27.167440 | orchestrator | Saturday 27 September 2025 21:32:36 +0000 (0:00:01.417) 0:00:05.553 **** 2025-09-27 21:33:27.167460 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-27 21:33:27.167480 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-27 21:33:27.167506 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-27 21:33:27.167527 | 
orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-27 21:33:27.167546 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-27 21:33:27.167562 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-27 21:33:27.167627 | orchestrator | 2025-09-27 21:33:27.167645 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-27 21:33:27.167659 | orchestrator | Saturday 27 September 2025 21:32:38 +0000 (0:00:01.673) 0:00:07.227 **** 2025-09-27 21:33:27.167670 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-27 21:33:27.167681 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:27.167693 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-27 21:33:27.167704 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:27.167714 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-27 21:33:27.167725 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:27.167736 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-27 21:33:27.167746 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:27.167757 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-27 21:33:27.167768 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:27.167779 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-27 21:33:27.167790 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:27.167810 | orchestrator | 2025-09-27 21:33:27.167821 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-27 21:33:27.167832 | orchestrator | Saturday 27 September 2025 21:32:39 +0000 (0:00:01.197) 0:00:08.424 **** 2025-09-27 21:33:27.167842 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:27.167853 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:27.167864 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:27.167874 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:27.167885 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:27.167896 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:27.167906 | orchestrator | 2025-09-27 21:33:27.167917 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-27 21:33:27.167928 | orchestrator | Saturday 27 September 2025 21:32:40 +0000 (0:00:00.533) 0:00:08.958 **** 2025-09-27 21:33:27.167957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.167974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.167987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168034 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168127 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168139 | orchestrator | 2025-09-27 21:33:27.168150 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-27 21:33:27.168161 | orchestrator | Saturday 27 September 2025 21:32:41 +0000 (0:00:01.896) 0:00:10.854 **** 2025-09-27 21:33:27.168173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 
'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168312 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168354 | orchestrator | 2025-09-27 21:33:27.168365 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-27 21:33:27.168376 | orchestrator | Saturday 27 September 2025 21:32:44 +0000 (0:00:02.441) 0:00:13.296 **** 2025-09-27 21:33:27.168387 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:27.168398 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:27.168409 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:27.168419 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:27.168430 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:27.168441 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:27.168452 | orchestrator | 2025-09-27 21:33:27.168462 | orchestrator | TASK [openvswitch 
: Check openvswitch containers] ****************************** 2025-09-27 21:33:27.168473 | orchestrator | Saturday 27 September 2025 21:32:45 +0000 (0:00:00.848) 0:00:14.144 **** 2025-09-27 21:33:27.168484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-27 21:33:27.168685 | orchestrator | 2025-09-27 21:33:27.168696 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-27 21:33:27.168707 | orchestrator | Saturday 27 September 2025 21:32:47 +0000 (0:00:02.254) 0:00:16.399 **** 2025-09-27 21:33:27.168718 | orchestrator | 2025-09-27 21:33:27.168735 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-27 21:33:27.168746 | orchestrator | Saturday 27 September 2025 21:32:47 +0000 (0:00:00.291) 0:00:16.691 **** 2025-09-27 21:33:27.168757 | orchestrator | 2025-09-27 21:33:27.168768 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-27 21:33:27.168779 | orchestrator | Saturday 27 September 2025 21:32:47 +0000 (0:00:00.127) 0:00:16.818 **** 2025-09-27 21:33:27.168789 | orchestrator | 2025-09-27 21:33:27.168800 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-27 21:33:27.168811 | orchestrator | Saturday 27 September 2025 21:32:48 +0000 (0:00:00.130) 0:00:16.948 **** 2025-09-27 21:33:27.168821 | orchestrator | 2025-09-27 21:33:27.168835 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-27 21:33:27.168853 | orchestrator | Saturday 27 September 2025 21:32:48 +0000 (0:00:00.179) 0:00:17.129 **** 2025-09-27 21:33:27.168870 | orchestrator | 2025-09-27 21:33:27.168890 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-27 21:33:27.168909 | 
orchestrator | Saturday 27 September 2025 21:32:48 +0000 (0:00:00.247) 0:00:17.377 **** 2025-09-27 21:33:27.168927 | orchestrator | 2025-09-27 21:33:27.168939 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-27 21:33:27.168950 | orchestrator | Saturday 27 September 2025 21:32:48 +0000 (0:00:00.213) 0:00:17.590 **** 2025-09-27 21:33:27.168966 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:27.168977 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:27.168988 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:27.168999 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:27.169009 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:27.169020 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:27.169031 | orchestrator | 2025-09-27 21:33:27.169044 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-27 21:33:27.169063 | orchestrator | Saturday 27 September 2025 21:32:55 +0000 (0:00:06.328) 0:00:23.918 **** 2025-09-27 21:33:27.169082 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:27.169098 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:27.169109 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:27.169120 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:27.169137 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:27.169157 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:27.169176 | orchestrator | 2025-09-27 21:33:27.169195 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-27 21:33:27.169208 | orchestrator | Saturday 27 September 2025 21:32:56 +0000 (0:00:01.766) 0:00:25.685 **** 2025-09-27 21:33:27.169219 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:27.169230 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:27.169241 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:27.169252 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:27.169262 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:27.169273 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:27.169283 | orchestrator | 2025-09-27 21:33:27.169294 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-27 21:33:27.169305 | orchestrator | Saturday 27 September 2025 21:33:02 +0000 (0:00:05.837) 0:00:31.522 **** 2025-09-27 21:33:27.169315 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-27 21:33:27.169326 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-27 21:33:27.169337 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-27 21:33:27.169348 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-27 21:33:27.169359 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-27 21:33:27.169385 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-27 21:33:27.169396 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-27 21:33:27.169407 
| orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-27 21:33:27.169418 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-27 21:33:27.169428 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-27 21:33:27.169439 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-27 21:33:27.169450 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-27 21:33:27.169460 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 21:33:27.169471 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 21:33:27.169482 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 21:33:27.169492 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 21:33:27.169503 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 21:33:27.169513 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-27 21:33:27.169524 | orchestrator | 2025-09-27 21:33:27.169539 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-27 21:33:27.169553 | orchestrator | Saturday 27 September 2025 21:33:11 +0000 (0:00:08.486) 0:00:40.010 **** 2025-09-27 21:33:27.169564 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-27 21:33:27.169838 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:27.169857 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-27 21:33:27.169869 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:27.169880 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-27 21:33:27.169892 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:27.169902 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-27 21:33:27.169913 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-27 21:33:27.169924 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-27 21:33:27.169935 | orchestrator | 2025-09-27 21:33:27.169948 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-27 21:33:27.169960 | orchestrator | Saturday 27 September 2025 21:33:13 +0000 (0:00:02.727) 0:00:42.737 **** 2025-09-27 21:33:27.169971 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-27 21:33:27.169998 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:27.170009 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-27 21:33:27.170057 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:27.170072 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-27 21:33:27.170082 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:27.170093 | orchestrator | changed: [testbed-node-1] 
=> (item=['br-ex', 'vxlan0']) 2025-09-27 21:33:27.170104 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-27 21:33:27.170115 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-27 21:33:27.170126 | orchestrator | 2025-09-27 21:33:27.170137 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-27 21:33:27.170147 | orchestrator | Saturday 27 September 2025 21:33:17 +0000 (0:00:03.657) 0:00:46.394 **** 2025-09-27 21:33:27.170173 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:27.170183 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:27.170193 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:27.170202 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:27.170212 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:27.170221 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:27.170231 | orchestrator | 2025-09-27 21:33:27.170241 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:33:27.170252 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:33:27.170263 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:33:27.170272 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:33:27.170282 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:33:27.170291 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:33:27.170323 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:33:27.170333 | orchestrator | 2025-09-27 21:33:27.170343 | orchestrator | 2025-09-27 21:33:27.170353 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:33:27.170362 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:08.667) 0:00:55.062 **** 2025-09-27 21:33:27.170372 | orchestrator | =============================================================================== 2025-09-27 21:33:27.170382 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.51s 2025-09-27 21:33:27.170391 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.49s 2025-09-27 21:33:27.170401 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.33s 2025-09-27 21:33:27.170410 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.66s 2025-09-27 21:33:27.170420 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.73s 2025-09-27 21:33:27.170429 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.44s 2025-09-27 21:33:27.170439 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.25s 2025-09-27 21:33:27.170448 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.90s 2025-09-27 21:33:27.170458 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.77s 2025-09-27 21:33:27.170467 | orchestrator | module-load : Persist modules via 
modules-load.d ------------------------ 1.67s 2025-09-27 21:33:27.170477 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.56s 2025-09-27 21:33:27.170486 | orchestrator | module-load : Load modules ---------------------------------------------- 1.42s 2025-09-27 21:33:27.170496 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2025-09-27 21:33:27.170505 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.20s 2025-09-27 21:33:27.170515 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.19s 2025-09-27 21:33:27.170524 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s 2025-09-27 21:33:27.170534 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.85s 2025-09-27 21:33:27.170543 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.53s 2025-09-27 21:33:27.170562 | orchestrator | 2025-09-27 21:33:27 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:27.170600 | orchestrator | 2025-09-27 21:33:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:30.210852 | orchestrator | 2025-09-27 21:33:30 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:30.216933 | orchestrator | 2025-09-27 21:33:30 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:30.218717 | orchestrator | 2025-09-27 21:33:30 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:30.221462 | orchestrator | 2025-09-27 21:33:30 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:30.222852 | orchestrator | 2025-09-27 21:33:30 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:30.222893 | orchestrator | 2025-09-27 21:33:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:33.255302 | orchestrator | 2025-09-27 21:33:33 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:33.257985 | orchestrator | 2025-09-27 21:33:33 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:33.258762 | orchestrator | 2025-09-27 21:33:33 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:33.259548 | orchestrator | 2025-09-27 21:33:33 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:33.260405 | orchestrator | 2025-09-27 21:33:33 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:33.261487 | orchestrator | 2025-09-27 21:33:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:36.293666 | orchestrator | 2025-09-27 21:33:36 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:36.294113 | orchestrator | 2025-09-27 21:33:36 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:36.294728 | orchestrator | 2025-09-27 21:33:36 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:36.295348 | orchestrator | 2025-09-27 21:33:36 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:36.296088 | orchestrator | 2025-09-27 21:33:36 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 
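
For reference, the openvswitch-vswitchd service definition that the "Copying over config.json files" and "Check openvswitch containers" tasks loop over corresponds to the following YAML. The values are the ones shown in the log items above; the layout is only a readability sketch, not the role's actual defaults file:

openvswitch-vswitchd:
  container_name: openvswitch_vswitchd
  image: registry.osism.tech/kolla/openvswitch-vswitchd:2024.2
  enabled: true
  group: openvswitch
  host_in_groups: true
  privileged: true
  volumes:
    - "/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "/lib/modules:/lib/modules:ro"
    - "/run/openvswitch:/run/openvswitch:shared"
    - "kolla_logs:/var/log/kolla/"
  dimensions: {}
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "ovs-appctl version"]
    timeout: "30"

The healthcheck simply runs "ovs-appctl version" inside the container; the db-server container uses the same structure with "ovsdb-client list-dbs" as its test, as seen in the items above.
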
2025-09-27 21:33:36.296254 | orchestrator | 2025-09-27 21:33:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:39.484937 | orchestrator | 2025-09-27 21:33:39 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:39.485396 | orchestrator | 2025-09-27 21:33:39 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:39.485967 | orchestrator | 2025-09-27 21:33:39 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:39.486535 | orchestrator | 2025-09-27 21:33:39 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:39.487149 | orchestrator | 2025-09-27 21:33:39 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:39.487173 | orchestrator | 2025-09-27 21:33:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:42.659999 | orchestrator | 2025-09-27 21:33:42 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:42.660081 | orchestrator | 2025-09-27 21:33:42 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:42.660120 | orchestrator | 2025-09-27 21:33:42 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:42.660132 | orchestrator | 2025-09-27 21:33:42 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:42.660143 | orchestrator | 2025-09-27 21:33:42 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:42.660154 | orchestrator | 2025-09-27 21:33:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:45.988667 | orchestrator | 2025-09-27 21:33:45 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:45.989429 | orchestrator | 2025-09-27 21:33:45 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state STARTED 2025-09-27 21:33:45.990801 | orchestrator | 2025-09-27 21:33:45 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:45.992443 | orchestrator | 2025-09-27 21:33:45 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:45.993509 | orchestrator | 2025-09-27 21:33:45 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:45.993536 | orchestrator | 2025-09-27 21:33:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:49.122382 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:49.124691 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task 8d6a357e-88c4-4fde-bda8-37e4d1668a25 is in state STARTED 2025-09-27 21:33:49.127897 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task 7a15152f-cd5a-41f4-ad34-0411fecfb99e is in state SUCCESS 2025-09-27 21:33:49.129758 | orchestrator | 2025-09-27 21:33:49.129798 | orchestrator | 2025-09-27 21:33:49.129811 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-27 21:33:49.129822 | orchestrator | 2025-09-27 21:33:49.129834 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-27 21:33:49.129845 | orchestrator | Saturday 27 September 2025 21:30:14 +0000 (0:00:00.232) 0:00:00.232 **** 2025-09-27 21:33:49.129857 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.129868 | orchestrator | ok: [testbed-node-4] 2025-09-27 
21:33:49.129879 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.129890 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.129917 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.129929 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.129941 | orchestrator | 2025-09-27 21:33:49.129952 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-27 21:33:49.129965 | orchestrator | Saturday 27 September 2025 21:30:14 +0000 (0:00:00.747) 0:00:00.980 **** 2025-09-27 21:33:49.129976 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.129988 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.129999 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.130010 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.130073 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.130086 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.130097 | orchestrator | 2025-09-27 21:33:49.130109 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-27 21:33:49.130121 | orchestrator | Saturday 27 September 2025 21:30:15 +0000 (0:00:00.560) 0:00:01.540 **** 2025-09-27 21:33:49.130132 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.130144 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.130155 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.130167 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.130178 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.130190 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.130202 | orchestrator | 2025-09-27 21:33:49.130213 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-27 21:33:49.130248 | orchestrator | Saturday 27 September 2025 21:30:16 +0000 (0:00:00.738) 0:00:02.278 **** 2025-09-27 21:33:49.130261 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.130272 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.130282 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.130293 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.130304 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.130315 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.130326 | orchestrator | 2025-09-27 21:33:49.130338 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-27 21:33:49.130350 | orchestrator | Saturday 27 September 2025 21:30:18 +0000 (0:00:01.836) 0:00:04.115 **** 2025-09-27 21:33:49.130362 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.130375 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.130386 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.130399 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.130411 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.130422 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.130434 | orchestrator | 2025-09-27 21:33:49.130446 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-27 21:33:49.130458 | orchestrator | Saturday 27 September 2025 21:30:19 +0000 (0:00:01.832) 0:00:05.948 **** 2025-09-27 21:33:49.130471 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.130483 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.130494 | orchestrator | 
changed: [testbed-node-5] 2025-09-27 21:33:49.130506 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.130518 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.130529 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.130541 | orchestrator | 2025-09-27 21:33:49.130584 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-27 21:33:49.130596 | orchestrator | Saturday 27 September 2025 21:30:20 +0000 (0:00:01.112) 0:00:07.060 **** 2025-09-27 21:33:49.130608 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.130620 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.130632 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.130643 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.130655 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.130667 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.130679 | orchestrator | 2025-09-27 21:33:49.130691 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-27 21:33:49.130702 | orchestrator | Saturday 27 September 2025 21:30:21 +0000 (0:00:00.740) 0:00:07.800 **** 2025-09-27 21:33:49.130713 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.130724 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.130734 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.130745 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.130756 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.130766 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.130777 | orchestrator | 2025-09-27 21:33:49.130788 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-27 21:33:49.130799 | orchestrator | Saturday 27 September 2025 21:30:22 +0000 (0:00:00.768) 0:00:08.569 **** 2025-09-27 21:33:49.130810 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:33:49.130821 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:33:49.130832 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.130843 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:33:49.130853 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:33:49.130864 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.130875 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:33:49.130886 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:33:49.130909 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.130925 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:33:49.130950 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:33:49.130962 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.130973 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:33:49.130984 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:33:49.130994 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.131005 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:33:49.131016 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:33:49.131027 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.131038 | orchestrator | 2025-09-27 21:33:49.131048 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-27 21:33:49.131059 | orchestrator | Saturday 27 September 2025 21:30:23 +0000 (0:00:00.646) 0:00:09.215 **** 2025-09-27 21:33:49.131070 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.131081 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.131092 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.131103 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.131114 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.131124 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.131135 | orchestrator | 2025-09-27 21:33:49.131146 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-27 21:33:49.131158 | orchestrator | Saturday 27 September 2025 21:30:24 +0000 (0:00:01.299) 0:00:10.515 **** 2025-09-27 21:33:49.131169 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.131180 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.131190 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.131201 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.131212 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.131223 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.131233 | orchestrator | 2025-09-27 21:33:49.131244 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-27 21:33:49.131255 | orchestrator | Saturday 27 September 2025 21:30:25 +0000 (0:00:01.247) 0:00:11.762 **** 2025-09-27 21:33:49.131266 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.131277 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.131287 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.131298 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.131309 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.131319 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.131330 | orchestrator | 2025-09-27 21:33:49.131341 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-27 21:33:49.131352 | orchestrator | Saturday 27 September 2025 21:30:31 +0000 (0:00:06.316) 0:00:18.079 **** 2025-09-27 21:33:49.131362 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.131373 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.131384 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.131395 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.131406 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.131417 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.131427 | orchestrator | 2025-09-27 21:33:49.131438 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-27 21:33:49.131449 | orchestrator | Saturday 27 September 2025 21:30:33 +0000 (0:00:01.722) 0:00:19.802 **** 2025-09-27 21:33:49.131460 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.131471 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.131481 | orchestrator | skipping: 
[testbed-node-5] 2025-09-27 21:33:49.131498 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.131509 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.131520 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.131531 | orchestrator | 2025-09-27 21:33:49.131542 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-27 21:33:49.131569 | orchestrator | Saturday 27 September 2025 21:30:35 +0000 (0:00:02.020) 0:00:21.822 **** 2025-09-27 21:33:49.131580 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.131591 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.131602 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.131612 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.131623 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.131634 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.131645 | orchestrator | 2025-09-27 21:33:49.131656 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-27 21:33:49.131667 | orchestrator | Saturday 27 September 2025 21:30:36 +0000 (0:00:00.891) 0:00:22.714 **** 2025-09-27 21:33:49.131678 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-27 21:33:49.131689 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-27 21:33:49.131700 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-27 21:33:49.131711 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-27 21:33:49.131721 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-27 21:33:49.131732 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-27 21:33:49.131743 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-27 21:33:49.131754 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-27 21:33:49.131764 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-27 21:33:49.131775 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-27 21:33:49.131786 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-27 21:33:49.131796 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-27 21:33:49.131807 | orchestrator | 2025-09-27 21:33:49.131818 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-27 21:33:49.131829 | orchestrator | Saturday 27 September 2025 21:30:38 +0000 (0:00:01.667) 0:00:24.382 **** 2025-09-27 21:33:49.131840 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.131850 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.131861 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.131876 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.131887 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.131898 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.131909 | orchestrator | 2025-09-27 21:33:49.131925 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-27 21:33:49.131937 | orchestrator | 2025-09-27 21:33:49.131948 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-27 21:33:49.131959 | orchestrator | Saturday 27 September 2025 21:30:40 +0000 (0:00:01.767) 0:00:26.149 **** 2025-09-27 21:33:49.131970 | orchestrator | ok: 
[testbed-node-0] 2025-09-27 21:33:49.131980 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.131991 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.132002 | orchestrator | 2025-09-27 21:33:49.132013 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-27 21:33:49.132024 | orchestrator | Saturday 27 September 2025 21:30:41 +0000 (0:00:01.199) 0:00:27.349 **** 2025-09-27 21:33:49.132035 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.132046 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.132056 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.132067 | orchestrator | 2025-09-27 21:33:49.132226 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-27 21:33:49.132243 | orchestrator | Saturday 27 September 2025 21:30:42 +0000 (0:00:01.307) 0:00:28.657 **** 2025-09-27 21:33:49.132254 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.132274 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.132285 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.132296 | orchestrator | 2025-09-27 21:33:49.132307 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-27 21:33:49.132318 | orchestrator | Saturday 27 September 2025 21:30:43 +0000 (0:00:01.099) 0:00:29.756 **** 2025-09-27 21:33:49.132329 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.132339 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.132350 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.132361 | orchestrator | 2025-09-27 21:33:49.132372 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-27 21:33:49.132382 | orchestrator | Saturday 27 September 2025 21:30:44 +0000 (0:00:00.877) 0:00:30.633 **** 2025-09-27 21:33:49.132393 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.132404 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.132415 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.132426 | orchestrator | 2025-09-27 21:33:49.132437 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-27 21:33:49.132447 | orchestrator | Saturday 27 September 2025 21:30:44 +0000 (0:00:00.288) 0:00:30.922 **** 2025-09-27 21:33:49.132458 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.132469 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.132480 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.132491 | orchestrator | 2025-09-27 21:33:49.132502 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-27 21:33:49.132513 | orchestrator | Saturday 27 September 2025 21:30:45 +0000 (0:00:00.647) 0:00:31.569 **** 2025-09-27 21:33:49.132524 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.132534 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.132563 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.132575 | orchestrator | 2025-09-27 21:33:49.132586 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-27 21:33:49.132597 | orchestrator | Saturday 27 September 2025 21:30:46 +0000 (0:00:01.366) 0:00:32.936 **** 2025-09-27 21:33:49.132608 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:33:49.132619 | orchestrator | 2025-09-27 
21:33:49.132630 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-27 21:33:49.132640 | orchestrator | Saturday 27 September 2025 21:30:47 +0000 (0:00:00.611) 0:00:33.547 **** 2025-09-27 21:33:49.132651 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.132662 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.132673 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.132683 | orchestrator | 2025-09-27 21:33:49.132694 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-27 21:33:49.132705 | orchestrator | Saturday 27 September 2025 21:30:48 +0000 (0:00:01.342) 0:00:34.889 **** 2025-09-27 21:33:49.132716 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.132727 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.132738 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.132748 | orchestrator | 2025-09-27 21:33:49.132760 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-27 21:33:49.132771 | orchestrator | Saturday 27 September 2025 21:30:49 +0000 (0:00:00.864) 0:00:35.754 **** 2025-09-27 21:33:49.132782 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.132792 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.132803 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.132814 | orchestrator | 2025-09-27 21:33:49.132825 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-27 21:33:49.132835 | orchestrator | Saturday 27 September 2025 21:30:51 +0000 (0:00:01.447) 0:00:37.201 **** 2025-09-27 21:33:49.132846 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.132857 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.132868 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.132884 | orchestrator | 2025-09-27 21:33:49.132895 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-27 21:33:49.132906 | orchestrator | Saturday 27 September 2025 21:30:53 +0000 (0:00:02.084) 0:00:39.285 **** 2025-09-27 21:33:49.132917 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.132928 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.132939 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.132949 | orchestrator | 2025-09-27 21:33:49.132960 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-27 21:33:49.132971 | orchestrator | Saturday 27 September 2025 21:30:53 +0000 (0:00:00.297) 0:00:39.583 **** 2025-09-27 21:33:49.132982 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.132993 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.133003 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.133014 | orchestrator | 2025-09-27 21:33:49.133025 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-27 21:33:49.133042 | orchestrator | Saturday 27 September 2025 21:30:53 +0000 (0:00:00.329) 0:00:39.912 **** 2025-09-27 21:33:49.133053 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.133064 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.133074 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.133085 | orchestrator | 2025-09-27 21:33:49.133105 | orchestrator | TASK [k3s_server : Verify that all nodes actually 
joined (check k3s-init.service if this fails)] *** 2025-09-27 21:33:49.133117 | orchestrator | Saturday 27 September 2025 21:30:55 +0000 (0:00:01.719) 0:00:41.631 **** 2025-09-27 21:33:49.133129 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-27 21:33:49.133140 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-27 21:33:49.133151 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-27 21:33:49.133163 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-27 21:33:49.133174 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-27 21:33:49.133185 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-27 21:33:49.133196 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-27 21:33:49.133207 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-27 21:33:49.133218 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-27 21:33:49.133229 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-27 21:33:49.133240 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-27 21:33:49.133250 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-27 21:33:49.133261 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-27 21:33:49.133272 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-09-27 21:33:49.133283 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
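
The "Verify that all nodes actually joined" retries above are a standard retry/until pattern: the task keeps polling until every master shows up in the node list or the retries are exhausted. A minimal sketch of such a task follows; the command, group name, and delay are illustrative assumptions, not copied from the k3s_server role:

- name: Verify that all nodes actually joined   # sketch; command and group name are assumptions
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o name
  register: nodes
  changed_when: false
  retries: 20          # the log above starts at "20 retries left"
  delay: 10
  until: nodes.rc == 0 and (nodes.stdout_lines | length) >= (groups['k3s_masters'] | default([]) | length)

In the run above the check eventually reports ok for all three master nodes after several retries.
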
2025-09-27 21:33:49.133300 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.133311 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.133322 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.133333 | orchestrator | 2025-09-27 21:33:49.133344 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-27 21:33:49.133355 | orchestrator | Saturday 27 September 2025 21:31:51 +0000 (0:00:55.659) 0:01:37.291 **** 2025-09-27 21:33:49.133366 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.133377 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.133387 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.133398 | orchestrator | 2025-09-27 21:33:49.133409 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-27 21:33:49.133420 | orchestrator | Saturday 27 September 2025 21:31:51 +0000 (0:00:00.297) 0:01:37.588 **** 2025-09-27 21:33:49.133431 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.133442 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.133453 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.133463 | orchestrator | 2025-09-27 21:33:49.133474 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-27 21:33:49.133485 | orchestrator | Saturday 27 September 2025 21:31:52 +0000 (0:00:01.029) 0:01:38.618 **** 2025-09-27 21:33:49.133496 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.133507 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.133518 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.133528 | orchestrator | 2025-09-27 21:33:49.133539 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-27 21:33:49.133565 | orchestrator | Saturday 27 September 2025 21:31:53 +0000 (0:00:01.216) 0:01:39.834 **** 2025-09-27 21:33:49.133577 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.133588 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.133598 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.133609 | orchestrator | 2025-09-27 21:33:49.133620 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-27 21:33:49.133631 | orchestrator | Saturday 27 September 2025 21:32:20 +0000 (0:00:26.594) 0:02:06.428 **** 2025-09-27 21:33:49.133642 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.133653 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.133663 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.133674 | orchestrator | 2025-09-27 21:33:49.133685 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-27 21:33:49.133696 | orchestrator | Saturday 27 September 2025 21:32:21 +0000 (0:00:00.712) 0:02:07.141 **** 2025-09-27 21:33:49.133712 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.133723 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.133734 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.133744 | orchestrator | 2025-09-27 21:33:49.133760 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-27 21:33:49.133772 | orchestrator | Saturday 27 September 2025 21:32:21 +0000 (0:00:00.741) 0:02:07.883 **** 2025-09-27 21:33:49.133783 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.133794 | orchestrator | changed: 
[testbed-node-1] 2025-09-27 21:33:49.133804 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.133815 | orchestrator | 2025-09-27 21:33:49.133826 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-27 21:33:49.133837 | orchestrator | Saturday 27 September 2025 21:32:22 +0000 (0:00:00.664) 0:02:08.547 **** 2025-09-27 21:33:49.133848 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.133858 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.133869 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.133880 | orchestrator | 2025-09-27 21:33:49.133891 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-27 21:33:49.133902 | orchestrator | Saturday 27 September 2025 21:32:23 +0000 (0:00:00.908) 0:02:09.455 **** 2025-09-27 21:33:49.133913 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.133930 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.133941 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.133951 | orchestrator | 2025-09-27 21:33:49.133962 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-27 21:33:49.133973 | orchestrator | Saturday 27 September 2025 21:32:23 +0000 (0:00:00.279) 0:02:09.735 **** 2025-09-27 21:33:49.133984 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.133995 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.134006 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.134042 | orchestrator | 2025-09-27 21:33:49.134055 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-27 21:33:49.134066 | orchestrator | Saturday 27 September 2025 21:32:24 +0000 (0:00:00.788) 0:02:10.523 **** 2025-09-27 21:33:49.134077 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.134088 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.134099 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.134109 | orchestrator | 2025-09-27 21:33:49.134120 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-27 21:33:49.134131 | orchestrator | Saturday 27 September 2025 21:32:25 +0000 (0:00:00.765) 0:02:11.289 **** 2025-09-27 21:33:49.134142 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.134153 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.134164 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.134175 | orchestrator | 2025-09-27 21:33:49.134185 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-27 21:33:49.134196 | orchestrator | Saturday 27 September 2025 21:32:26 +0000 (0:00:01.297) 0:02:12.587 **** 2025-09-27 21:33:49.134207 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:33:49.134218 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:33:49.134229 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:33:49.134239 | orchestrator | 2025-09-27 21:33:49.134250 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-27 21:33:49.134261 | orchestrator | Saturday 27 September 2025 21:32:27 +0000 (0:00:00.932) 0:02:13.519 **** 2025-09-27 21:33:49.134272 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.134283 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.134294 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
21:33:49.134305 | orchestrator | 2025-09-27 21:33:49.134315 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-27 21:33:49.134326 | orchestrator | Saturday 27 September 2025 21:32:27 +0000 (0:00:00.317) 0:02:13.836 **** 2025-09-27 21:33:49.134337 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.134348 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.134359 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.134369 | orchestrator | 2025-09-27 21:33:49.134380 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-27 21:33:49.134391 | orchestrator | Saturday 27 September 2025 21:32:28 +0000 (0:00:00.316) 0:02:14.152 **** 2025-09-27 21:33:49.134402 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.134413 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.134423 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.134434 | orchestrator | 2025-09-27 21:33:49.134445 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-27 21:33:49.134456 | orchestrator | Saturday 27 September 2025 21:32:29 +0000 (0:00:00.984) 0:02:15.136 **** 2025-09-27 21:33:49.134467 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.134478 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.134488 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.134499 | orchestrator | 2025-09-27 21:33:49.134510 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-27 21:33:49.134521 | orchestrator | Saturday 27 September 2025 21:32:29 +0000 (0:00:00.685) 0:02:15.822 **** 2025-09-27 21:33:49.134532 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-27 21:33:49.134587 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-27 21:33:49.134599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-27 21:33:49.134610 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-27 21:33:49.134621 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-27 21:33:49.134632 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-27 21:33:49.134643 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-27 21:33:49.134654 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-27 21:33:49.134670 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-27 21:33:49.134688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-27 21:33:49.134699 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-27 21:33:49.134710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-27 21:33:49.134721 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-27 21:33:49.134732 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-27 21:33:49.134743 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-27 21:33:49.134754 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-27 21:33:49.134764 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-27 21:33:49.134776 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-27 21:33:49.134787 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-27 21:33:49.134798 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-27 21:33:49.134809 | orchestrator | 2025-09-27 21:33:49.134820 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-27 21:33:49.134831 | orchestrator | 2025-09-27 21:33:49.134841 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-27 21:33:49.134852 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:03.219) 0:02:19.041 **** 2025-09-27 21:33:49.134863 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.134874 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.134885 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.134895 | orchestrator | 2025-09-27 21:33:49.134906 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-27 21:33:49.134917 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.467) 0:02:19.508 **** 2025-09-27 21:33:49.134928 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.134939 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.134950 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.134960 | orchestrator | 2025-09-27 21:33:49.134971 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-27 21:33:49.134982 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.597) 0:02:20.106 **** 2025-09-27 21:33:49.134993 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.135004 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.135015 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.135025 | orchestrator | 2025-09-27 21:33:49.135036 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-27 21:33:49.135053 | orchestrator | Saturday 27 September 2025 21:32:34 +0000 (0:00:00.333) 0:02:20.440 **** 2025-09-27 21:33:49.135064 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:33:49.135075 | orchestrator | 2025-09-27 21:33:49.135086 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-27 21:33:49.135097 | orchestrator | Saturday 27 September 2025 21:32:34 +0000 (0:00:00.606) 0:02:21.047 **** 2025-09-27 21:33:49.135108 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.135119 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.135130 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.135141 | orchestrator | 2025-09-27 21:33:49.135151 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-09-27 21:33:49.135162 | orchestrator | Saturday 27 September 2025 21:32:35 +0000 (0:00:00.283) 0:02:21.331 **** 2025-09-27 21:33:49.135173 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.135184 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.135195 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.135206 | orchestrator | 2025-09-27 21:33:49.135217 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-27 21:33:49.135228 | orchestrator | Saturday 27 September 2025 21:32:35 +0000 (0:00:00.321) 0:02:21.652 **** 2025-09-27 21:33:49.135239 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.135393 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.135408 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.135420 | orchestrator | 2025-09-27 21:33:49.135430 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-27 21:33:49.135441 | orchestrator | Saturday 27 September 2025 21:32:35 +0000 (0:00:00.259) 0:02:21.912 **** 2025-09-27 21:33:49.135452 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.135463 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.135474 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.135485 | orchestrator | 2025-09-27 21:33:49.135496 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-27 21:33:49.135507 | orchestrator | Saturday 27 September 2025 21:32:36 +0000 (0:00:00.742) 0:02:22.654 **** 2025-09-27 21:33:49.135518 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.135529 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.135540 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.135604 | orchestrator | 2025-09-27 21:33:49.135615 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-27 21:33:49.135626 | orchestrator | Saturday 27 September 2025 21:32:37 +0000 (0:00:01.253) 0:02:23.907 **** 2025-09-27 21:33:49.135637 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.135648 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.135659 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.135669 | orchestrator | 2025-09-27 21:33:49.135680 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-27 21:33:49.135698 | orchestrator | Saturday 27 September 2025 21:32:39 +0000 (0:00:01.323) 0:02:25.231 **** 2025-09-27 21:33:49.135709 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:33:49.135720 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:33:49.135731 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:33:49.135742 | orchestrator | 2025-09-27 21:33:49.135760 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-27 21:33:49.135772 | orchestrator | 2025-09-27 21:33:49.135783 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-27 21:33:49.135794 | orchestrator | Saturday 27 September 2025 21:32:51 +0000 (0:00:12.562) 0:02:37.794 **** 2025-09-27 21:33:49.135804 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.135815 | orchestrator | 2025-09-27 21:33:49.135826 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-27 
21:33:49.135837 | orchestrator | Saturday 27 September 2025 21:32:52 +0000 (0:00:00.725) 0:02:38.519 **** 2025-09-27 21:33:49.135848 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.135867 | orchestrator | 2025-09-27 21:33:49.135921 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-27 21:33:49.136100 | orchestrator | Saturday 27 September 2025 21:32:52 +0000 (0:00:00.379) 0:02:38.899 **** 2025-09-27 21:33:49.136117 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-27 21:33:49.136127 | orchestrator | 2025-09-27 21:33:49.136138 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-27 21:33:49.136148 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:00.576) 0:02:39.475 **** 2025-09-27 21:33:49.136158 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136168 | orchestrator | 2025-09-27 21:33:49.136178 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-27 21:33:49.136188 | orchestrator | Saturday 27 September 2025 21:32:54 +0000 (0:00:00.739) 0:02:40.215 **** 2025-09-27 21:33:49.136198 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136208 | orchestrator | 2025-09-27 21:33:49.136218 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-27 21:33:49.136228 | orchestrator | Saturday 27 September 2025 21:32:54 +0000 (0:00:00.495) 0:02:40.711 **** 2025-09-27 21:33:49.136238 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 21:33:49.136248 | orchestrator | 2025-09-27 21:33:49.136258 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-27 21:33:49.136268 | orchestrator | Saturday 27 September 2025 21:32:55 +0000 (0:00:01.379) 0:02:42.090 **** 2025-09-27 21:33:49.136278 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 21:33:49.136288 | orchestrator | 2025-09-27 21:33:49.136298 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-27 21:33:49.136308 | orchestrator | Saturday 27 September 2025 21:32:56 +0000 (0:00:00.738) 0:02:42.828 **** 2025-09-27 21:33:49.136318 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136328 | orchestrator | 2025-09-27 21:33:49.136337 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-27 21:33:49.136348 | orchestrator | Saturday 27 September 2025 21:32:57 +0000 (0:00:00.457) 0:02:43.286 **** 2025-09-27 21:33:49.136357 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136367 | orchestrator | 2025-09-27 21:33:49.136377 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-27 21:33:49.136387 | orchestrator | 2025-09-27 21:33:49.136397 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-27 21:33:49.136408 | orchestrator | Saturday 27 September 2025 21:32:57 +0000 (0:00:00.599) 0:02:43.885 **** 2025-09-27 21:33:49.136417 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.136428 | orchestrator | 2025-09-27 21:33:49.136437 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-27 21:33:49.136448 | orchestrator | Saturday 27 September 2025 21:32:57 +0000 (0:00:00.142) 0:02:44.028 **** 2025-09-27 
21:33:49.136458 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:33:49.136468 | orchestrator | 2025-09-27 21:33:49.136478 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-27 21:33:49.136488 | orchestrator | Saturday 27 September 2025 21:32:58 +0000 (0:00:00.270) 0:02:44.298 **** 2025-09-27 21:33:49.136497 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.136508 | orchestrator | 2025-09-27 21:33:49.136517 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-09-27 21:33:49.136527 | orchestrator | Saturday 27 September 2025 21:32:59 +0000 (0:00:00.920) 0:02:45.219 **** 2025-09-27 21:33:49.136537 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.136562 | orchestrator | 2025-09-27 21:33:49.136572 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-27 21:33:49.136582 | orchestrator | Saturday 27 September 2025 21:33:00 +0000 (0:00:01.530) 0:02:46.750 **** 2025-09-27 21:33:49.136592 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136610 | orchestrator | 2025-09-27 21:33:49.136620 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-27 21:33:49.136629 | orchestrator | Saturday 27 September 2025 21:33:01 +0000 (0:00:00.646) 0:02:47.396 **** 2025-09-27 21:33:49.136639 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.136649 | orchestrator | 2025-09-27 21:33:49.136658 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-27 21:33:49.136668 | orchestrator | Saturday 27 September 2025 21:33:01 +0000 (0:00:00.355) 0:02:47.751 **** 2025-09-27 21:33:49.136678 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136687 | orchestrator | 2025-09-27 21:33:49.136697 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-27 21:33:49.136707 | orchestrator | Saturday 27 September 2025 21:33:07 +0000 (0:00:06.332) 0:02:54.083 **** 2025-09-27 21:33:49.136716 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.136726 | orchestrator | 2025-09-27 21:33:49.136737 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-27 21:33:49.136747 | orchestrator | Saturday 27 September 2025 21:33:18 +0000 (0:00:10.874) 0:03:04.958 **** 2025-09-27 21:33:49.136757 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.136768 | orchestrator | 2025-09-27 21:33:49.136779 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-27 21:33:49.136789 | orchestrator | 2025-09-27 21:33:49.136805 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-27 21:33:49.136823 | orchestrator | Saturday 27 September 2025 21:33:19 +0000 (0:00:00.521) 0:03:05.479 **** 2025-09-27 21:33:49.136834 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.136845 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.136855 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.136866 | orchestrator | 2025-09-27 21:33:49.136877 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-27 21:33:49.136888 | orchestrator | Saturday 27 September 2025 21:33:19 +0000 (0:00:00.258) 0:03:05.738 **** 
2025-09-27 21:33:49.136899 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.136910 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.136921 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.136932 | orchestrator | 2025-09-27 21:33:49.136942 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-27 21:33:49.136952 | orchestrator | Saturday 27 September 2025 21:33:19 +0000 (0:00:00.323) 0:03:06.061 **** 2025-09-27 21:33:49.136963 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:33:49.136974 | orchestrator | 2025-09-27 21:33:49.136985 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-27 21:33:49.136995 | orchestrator | Saturday 27 September 2025 21:33:20 +0000 (0:00:00.647) 0:03:06.709 **** 2025-09-27 21:33:49.137006 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137016 | orchestrator | 2025-09-27 21:33:49.137027 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-27 21:33:49.137037 | orchestrator | Saturday 27 September 2025 21:33:20 +0000 (0:00:00.178) 0:03:06.887 **** 2025-09-27 21:33:49.137048 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137058 | orchestrator | 2025-09-27 21:33:49.137069 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-27 21:33:49.137079 | orchestrator | Saturday 27 September 2025 21:33:20 +0000 (0:00:00.201) 0:03:07.088 **** 2025-09-27 21:33:49.137090 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137100 | orchestrator | 2025-09-27 21:33:49.137109 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-27 21:33:49.137119 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.186) 0:03:07.275 **** 2025-09-27 21:33:49.137128 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137138 | orchestrator | 2025-09-27 21:33:49.137148 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-27 21:33:49.137164 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.195) 0:03:07.471 **** 2025-09-27 21:33:49.137174 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137184 | orchestrator | 2025-09-27 21:33:49.137193 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-27 21:33:49.137203 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.183) 0:03:07.654 **** 2025-09-27 21:33:49.137212 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137222 | orchestrator | 2025-09-27 21:33:49.137232 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-27 21:33:49.137241 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.185) 0:03:07.840 **** 2025-09-27 21:33:49.137251 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137260 | orchestrator | 2025-09-27 21:33:49.137270 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] *** 2025-09-27 21:33:49.137279 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.207) 0:03:08.047 **** 2025-09-27 21:33:49.137289 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137299 | orchestrator | 
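(Note: the Cilium CLI handling above and below is skipped on testbed-node-0 in this run; the conditions guarding those tasks evaluate to false, so no CLI download takes place here. For reference, a rough shell equivalent of what those tasks would do when they run is sketched below, following the upstream cilium-cli installation instructions; the exact version source, architecture value, and working directory are assumptions, not values taken from this job.)

    # Rough manual equivalent of the skipped Cilium CLI tasks (assumption: follows the
    # upstream cilium-cli install procedure; not taken verbatim from the role).
    CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
    CLI_ARCH=amd64   # "Set architecture variable": the role derives this from the host architecture
    cd /tmp          # "Create tmp directory on first master"
    # "Download Cilium CLI and checksum": the two loop items .tar.gz and .tar.gz.sha256sum above
    curl -L --fail --remote-name-all \
      "https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz"{,.sha256sum}
    sha256sum --check "cilium-linux-${CLI_ARCH}.tar.gz.sha256sum"    # "Verify the downloaded tarball"
    sudo tar xzvfC "cilium-linux-${CLI_ARCH}.tar.gz" /usr/local/bin  # "Extract Cilium CLI to /usr/local/bin"
    rm "cilium-linux-${CLI_ARCH}.tar.gz"{,.sha256sum}                # "Remove downloaded tarball and checksum file"
    cilium version --client                                          # corresponds to the installed-version check

(A rough equivalent of the later "Wait for Cilium resources" loop would be kubectl rollout status on deployment/cilium-operator and daemonset/cilium in kube-system, though the role may implement that wait differently.)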
2025-09-27 21:33:49.137308 | orchestrator | TASK [k3s_server_post : Set architecture variable] ***************************** 2025-09-27 21:33:49.137318 | orchestrator | Saturday 27 September 2025 21:33:22 +0000 (0:00:00.185) 0:03:08.233 **** 2025-09-27 21:33:49.137328 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137338 | orchestrator | 2025-09-27 21:33:49.137347 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] ********************** 2025-09-27 21:33:49.137357 | orchestrator | Saturday 27 September 2025 21:33:22 +0000 (0:00:00.183) 0:03:08.417 **** 2025-09-27 21:33:49.137366 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)  2025-09-27 21:33:49.137376 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)  2025-09-27 21:33:49.137386 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137395 | orchestrator | 2025-09-27 21:33:49.137405 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] ************************* 2025-09-27 21:33:49.137414 | orchestrator | Saturday 27 September 2025 21:33:22 +0000 (0:00:00.532) 0:03:08.950 **** 2025-09-27 21:33:49.137424 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137434 | orchestrator | 2025-09-27 21:33:49.137443 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ****************** 2025-09-27 21:33:49.137453 | orchestrator | Saturday 27 September 2025 21:33:22 +0000 (0:00:00.151) 0:03:09.101 **** 2025-09-27 21:33:49.137462 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137472 | orchestrator | 2025-09-27 21:33:49.137481 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] *********** 2025-09-27 21:33:49.137491 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.209) 0:03:09.310 **** 2025-09-27 21:33:49.137501 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137510 | orchestrator | 2025-09-27 21:33:49.137520 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-09-27 21:33:49.137529 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.290) 0:03:09.601 **** 2025-09-27 21:33:49.137539 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137562 | orchestrator | 2025-09-27 21:33:49.137572 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-09-27 21:33:49.137582 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.162) 0:03:09.763 **** 2025-09-27 21:33:49.137591 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137601 | orchestrator | 2025-09-27 21:33:49.137611 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-09-27 21:33:49.137625 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.151) 0:03:09.915 **** 2025-09-27 21:33:49.137634 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137644 | orchestrator | 2025-09-27 21:33:49.137654 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-09-27 21:33:49.137668 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.155) 0:03:10.071 **** 2025-09-27 21:33:49.137684 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137694 | orchestrator | 2025-09-27 21:33:49.137704 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-09-27 21:33:49.137714 
| orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:00.169) 0:03:10.240 **** 2025-09-27 21:33:49.137723 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137733 | orchestrator | 2025-09-27 21:33:49.137743 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-09-27 21:33:49.137752 | orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:00.160) 0:03:10.401 **** 2025-09-27 21:33:49.137762 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137772 | orchestrator | 2025-09-27 21:33:49.137781 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-09-27 21:33:49.137791 | orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:00.195) 0:03:10.596 **** 2025-09-27 21:33:49.137801 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137810 | orchestrator | 2025-09-27 21:33:49.137820 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-09-27 21:33:49.137830 | orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:00.154) 0:03:10.751 **** 2025-09-27 21:33:49.137839 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137849 | orchestrator | 2025-09-27 21:33:49.137859 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-09-27 21:33:49.137869 | orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:00.161) 0:03:10.912 **** 2025-09-27 21:33:49.137878 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)  2025-09-27 21:33:49.137888 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)  2025-09-27 21:33:49.137898 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)  2025-09-27 21:33:49.137907 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)  2025-09-27 21:33:49.137917 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137926 | orchestrator | 2025-09-27 21:33:49.137936 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-27 21:33:49.137946 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.613) 0:03:11.526 **** 2025-09-27 21:33:49.137955 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.137965 | orchestrator | 2025-09-27 21:33:49.137975 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-27 21:33:49.137984 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.196) 0:03:11.722 **** 2025-09-27 21:33:49.137994 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.138004 | orchestrator | 2025-09-27 21:33:49.138013 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-27 21:33:49.138048 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.177) 0:03:11.899 **** 2025-09-27 21:33:49.138058 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.138067 | orchestrator | 2025-09-27 21:33:49.138077 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-27 21:33:49.138087 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:00.247) 0:03:12.146 **** 2025-09-27 21:33:49.138097 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.138106 | orchestrator | 2025-09-27 21:33:49.138116 | orchestrator | TASK [k3s_server_post : Test 
for BGP config resources] ************************* 2025-09-27 21:33:49.138126 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:00.182) 0:03:12.329 **** 2025-09-27 21:33:49.138135 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-27 21:33:49.138145 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)  2025-09-27 21:33:49.138155 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.138165 | orchestrator | 2025-09-27 21:33:49.138174 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-27 21:33:49.138184 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:00.236) 0:03:12.565 **** 2025-09-27 21:33:49.138202 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.138212 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.138221 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.138231 | orchestrator | 2025-09-27 21:33:49.138240 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-27 21:33:49.138250 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:00.415) 0:03:12.981 **** 2025-09-27 21:33:49.138259 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.138269 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:33:49.138279 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.138288 | orchestrator | 2025-09-27 21:33:49.138298 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-27 21:33:49.138308 | orchestrator | 2025-09-27 21:33:49.138317 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-27 21:33:49.138327 | orchestrator | Saturday 27 September 2025 21:33:28 +0000 (0:00:01.152) 0:03:14.134 **** 2025-09-27 21:33:49.138336 | orchestrator | ok: [testbed-manager] 2025-09-27 21:33:49.138346 | orchestrator | 2025-09-27 21:33:49.138356 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-27 21:33:49.138365 | orchestrator | Saturday 27 September 2025 21:33:28 +0000 (0:00:00.125) 0:03:14.260 **** 2025-09-27 21:33:49.138375 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-27 21:33:49.138385 | orchestrator | 2025-09-27 21:33:49.138394 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-27 21:33:49.138404 | orchestrator | Saturday 27 September 2025 21:33:28 +0000 (0:00:00.185) 0:03:14.446 **** 2025-09-27 21:33:49.138413 | orchestrator | changed: [testbed-manager] 2025-09-27 21:33:49.138423 | orchestrator | 2025-09-27 21:33:49.138433 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-27 21:33:49.138442 | orchestrator | 2025-09-27 21:33:49.138456 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-27 21:33:49.138472 | orchestrator | Saturday 27 September 2025 21:33:33 +0000 (0:00:04.923) 0:03:19.369 **** 2025-09-27 21:33:49.138482 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:33:49.138492 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:33:49.138501 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:33:49.138511 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:33:49.138520 | orchestrator | ok: [testbed-node-1] 
2025-09-27 21:33:49.138530 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:33:49.138539 | orchestrator | 2025-09-27 21:33:49.138596 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-27 21:33:49.138606 | orchestrator | Saturday 27 September 2025 21:33:34 +0000 (0:00:00.783) 0:03:20.153 **** 2025-09-27 21:33:49.138616 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-27 21:33:49.138625 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-27 21:33:49.138635 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-27 21:33:49.138644 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-27 21:33:49.138654 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-27 21:33:49.138664 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-27 21:33:49.138673 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-27 21:33:49.138683 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-27 21:33:49.138692 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-27 21:33:49.138702 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-27 21:33:49.138711 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-27 21:33:49.138727 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-27 21:33:49.138736 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-27 21:33:49.138746 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-27 21:33:49.138755 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-27 21:33:49.138765 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-27 21:33:49.138775 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-27 21:33:49.138784 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-27 21:33:49.138794 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-27 21:33:49.138803 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-27 21:33:49.138813 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-27 21:33:49.138822 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-27 21:33:49.138832 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-27 21:33:49.138842 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-27 21:33:49.138851 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-27 21:33:49.138858 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/rook-mon=true) 2025-09-27 21:33:49.138866 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-27 21:33:49.138874 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-27 21:33:49.138882 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-27 21:33:49.138890 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-27 21:33:49.138898 | orchestrator | 2025-09-27 21:33:49.138906 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-27 21:33:49.138913 | orchestrator | Saturday 27 September 2025 21:33:46 +0000 (0:00:12.037) 0:03:32.191 **** 2025-09-27 21:33:49.138921 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.138929 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.138937 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.138945 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.138953 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.138960 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.138968 | orchestrator | 2025-09-27 21:33:49.138976 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-27 21:33:49.138984 | orchestrator | Saturday 27 September 2025 21:33:46 +0000 (0:00:00.610) 0:03:32.801 **** 2025-09-27 21:33:49.138992 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:33:49.138999 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:33:49.139007 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:33:49.139015 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:33:49.139023 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:33:49.139031 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:33:49.139039 | orchestrator | 2025-09-27 21:33:49.139050 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:33:49.139063 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:33:49.139072 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-27 21:33:49.139085 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 21:33:49.139093 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 21:33:49.139101 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-27 21:33:49.139109 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-27 21:33:49.139117 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-27 21:33:49.139125 | orchestrator | 2025-09-27 21:33:49.139133 | orchestrator | 2025-09-27 21:33:49.139141 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:33:49.139148 | orchestrator | Saturday 27 September 2025 21:33:47 +0000 (0:00:00.400) 0:03:33.201 **** 2025-09-27 21:33:49.139156 | orchestrator | =============================================================================== 2025-09-27 21:33:49.139164 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.66s 2025-09-27 21:33:49.139172 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.59s 2025-09-27 21:33:49.139180 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.56s 2025-09-27 21:33:49.139188 | orchestrator | Manage labels ---------------------------------------------------------- 12.04s 2025-09-27 21:33:49.139195 | orchestrator | kubectl : Install required packages ------------------------------------ 10.87s 2025-09-27 21:33:49.139203 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.33s 2025-09-27 21:33:49.139211 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.32s 2025-09-27 21:33:49.139219 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.92s 2025-09-27 21:33:49.139227 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.22s 2025-09-27 21:33:49.139235 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.08s 2025-09-27 21:33:49.139242 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.02s 2025-09-27 21:33:49.139250 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.84s 2025-09-27 21:33:49.139258 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.83s 2025-09-27 21:33:49.139266 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.77s 2025-09-27 21:33:49.139274 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.72s 2025-09-27 21:33:49.139281 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.72s 2025-09-27 21:33:49.139289 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.67s 2025-09-27 21:33:49.139297 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.53s 2025-09-27 21:33:49.139305 | orchestrator | k3s_server : Download vip rbac manifest to first master ----------------- 1.45s 2025-09-27 21:33:49.139313 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.38s 2025-09-27 21:33:49.139320 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:49.139328 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:49.139337 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:49.139352 | orchestrator | 2025-09-27 21:33:49 | INFO  | Task 249b3953-9521-4215-a411-ea77cd057982 is in state STARTED 2025-09-27 21:33:49.139360 | orchestrator | 2025-09-27 21:33:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:52.182240 | orchestrator | 2025-09-27 21:33:52 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:52.182319 | orchestrator | 2025-09-27 21:33:52 | INFO  | Task 8d6a357e-88c4-4fde-bda8-37e4d1668a25 is in state STARTED 2025-09-27 21:33:52.182333 | orchestrator | 2025-09-27 21:33:52 | INFO  | Task 
66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:52.182861 | orchestrator | 2025-09-27 21:33:52 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:52.184155 | orchestrator | 2025-09-27 21:33:52 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:52.185344 | orchestrator | 2025-09-27 21:33:52 | INFO  | Task 249b3953-9521-4215-a411-ea77cd057982 is in state STARTED 2025-09-27 21:33:52.186678 | orchestrator | 2025-09-27 21:33:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:55.211721 | orchestrator | 2025-09-27 21:33:55 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:55.212962 | orchestrator | 2025-09-27 21:33:55 | INFO  | Task 8d6a357e-88c4-4fde-bda8-37e4d1668a25 is in state STARTED 2025-09-27 21:33:55.214093 | orchestrator | 2025-09-27 21:33:55 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:55.214412 | orchestrator | 2025-09-27 21:33:55 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:55.215329 | orchestrator | 2025-09-27 21:33:55 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:55.216460 | orchestrator | 2025-09-27 21:33:55 | INFO  | Task 249b3953-9521-4215-a411-ea77cd057982 is in state SUCCESS 2025-09-27 21:33:55.216532 | orchestrator | 2025-09-27 21:33:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:33:58.251831 | orchestrator | 2025-09-27 21:33:58 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:33:58.253661 | orchestrator | 2025-09-27 21:33:58 | INFO  | Task 8d6a357e-88c4-4fde-bda8-37e4d1668a25 is in state SUCCESS 2025-09-27 21:33:58.254978 | orchestrator | 2025-09-27 21:33:58 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:33:58.257833 | orchestrator | 2025-09-27 21:33:58 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:33:58.259609 | orchestrator | 2025-09-27 21:33:58 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:33:58.259800 | orchestrator | 2025-09-27 21:33:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:34:01.312960 | orchestrator | 2025-09-27 21:34:01 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:34:01.314914 | orchestrator | 2025-09-27 21:34:01 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:34:01.317154 | orchestrator | 2025-09-27 21:34:01 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:34:01.319465 | orchestrator | 2025-09-27 21:34:01 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:34:01.319581 | orchestrator | 2025-09-27 21:34:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:34:04.476656 | orchestrator | 2025-09-27 21:34:04 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:34:04.476766 | orchestrator | 2025-09-27 21:34:04 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:34:04.477253 | orchestrator | 2025-09-27 21:34:04 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:34:04.477897 | orchestrator | 2025-09-27 21:34:04 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:34:04.477920 | orchestrator | 2025-09-27 
21:34:04 | INFO  | Wait 1 second(s) until the next check
[... identical status checks omitted: tasks d43d7209-19c8-49b5-b56a-762f56e97248, 66f165a9-457b-4073-8944-651bfd6cbf4d, 579a2840-0790-4637-ae1d-660eec761d85 and 27cf1923-62e1-40b9-8df6-6d4ad9702aad remain in state STARTED on every check from 21:34:07 through 21:35:08 ...]
2025-09-27 21:35:11.469347 | orchestrator | 2025-09-27 21:35:11 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:11.470137 | orchestrator | 2025-09-27 21:35:11 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:35:11.470988 | orchestrator | 2025-09-27 21:35:11 | INFO  | Task 
579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:11.471845 | orchestrator | 2025-09-27 21:35:11 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:11.471873 | orchestrator | 2025-09-27 21:35:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:14.519298 | orchestrator | 2025-09-27 21:35:14 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:14.522944 | orchestrator | 2025-09-27 21:35:14 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state STARTED 2025-09-27 21:35:14.524928 | orchestrator | 2025-09-27 21:35:14 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:14.527510 | orchestrator | 2025-09-27 21:35:14 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:14.527542 | orchestrator | 2025-09-27 21:35:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:17.555215 | orchestrator | 2025-09-27 21:35:17 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:17.557055 | orchestrator | 2025-09-27 21:35:17 | INFO  | Task 66f165a9-457b-4073-8944-651bfd6cbf4d is in state SUCCESS 2025-09-27 21:35:17.558402 | orchestrator | 2025-09-27 21:35:17.558483 | orchestrator | 2025-09-27 21:35:17.558498 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-27 21:35:17.558511 | orchestrator | 2025-09-27 21:35:17.558522 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-27 21:35:17.558534 | orchestrator | Saturday 27 September 2025 21:33:50 +0000 (0:00:00.120) 0:00:00.120 **** 2025-09-27 21:35:17.558546 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-27 21:35:17.558557 | orchestrator | 2025-09-27 21:35:17.558568 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-27 21:35:17.558580 | orchestrator | Saturday 27 September 2025 21:33:51 +0000 (0:00:00.810) 0:00:00.931 **** 2025-09-27 21:35:17.558591 | orchestrator | changed: [testbed-manager] 2025-09-27 21:35:17.558602 | orchestrator | 2025-09-27 21:35:17.558613 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-27 21:35:17.558624 | orchestrator | Saturday 27 September 2025 21:33:52 +0000 (0:00:01.173) 0:00:02.105 **** 2025-09-27 21:35:17.558635 | orchestrator | changed: [testbed-manager] 2025-09-27 21:35:17.558646 | orchestrator | 2025-09-27 21:35:17.558657 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:35:17.558668 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:35:17.558681 | orchestrator | 2025-09-27 21:35:17.558692 | orchestrator | 2025-09-27 21:35:17.558703 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:35:17.558714 | orchestrator | Saturday 27 September 2025 21:33:53 +0000 (0:00:00.340) 0:00:02.445 **** 2025-09-27 21:35:17.558725 | orchestrator | =============================================================================== 2025-09-27 21:35:17.558736 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s 2025-09-27 21:35:17.558747 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 
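
Editor's note: the play above fetches the kubeconfig from testbed-node-0 (192.168.16.10), writes it on testbed-manager, and then rewrites the API server address so the file is usable from the manager. A minimal sketch of what that last "Change server address in the kubeconfig file" step amounts to; PyYAML, the file path, and the target URL are assumptions for illustration, not the testbed's actual task:

```python
# Minimal sketch (not the actual testbed task): rewrite the API server
# address in a kubeconfig. The path and target URL are illustrative
# assumptions; PyYAML is assumed to be available.
import yaml

def rewrite_kubeconfig_server(path, new_server):
    with open(path) as handle:
        kubeconfig = yaml.safe_load(handle)
    # A kubeconfig lists clusters under "clusters"; each entry carries the
    # API endpoint in cluster.server.
    for entry in kubeconfig.get("clusters", []):
        entry["cluster"]["server"] = new_server
    with open(path, "w") as handle:
        yaml.safe_dump(kubeconfig, handle, default_flow_style=False)

if __name__ == "__main__":
    # Hypothetical values: a locally written kubeconfig and an API endpoint
    # on the 192.168.16.0/24 management network seen elsewhere in this log.
    rewrite_kubeconfig_server("/tmp/kubeconfig", "https://192.168.16.10:6443")
```

The "Prepare kubeconfig file" play that follows repeats the same fetch/rewrite for the copy made available inside the manager service.
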
2025-09-27 21:35:17.558757 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.34s 2025-09-27 21:35:17.558768 | orchestrator | 2025-09-27 21:35:17.558779 | orchestrator | 2025-09-27 21:35:17.558790 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-27 21:35:17.558801 | orchestrator | 2025-09-27 21:35:17.558812 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-27 21:35:17.558840 | orchestrator | Saturday 27 September 2025 21:33:51 +0000 (0:00:00.201) 0:00:00.201 **** 2025-09-27 21:35:17.558851 | orchestrator | ok: [testbed-manager] 2025-09-27 21:35:17.558863 | orchestrator | 2025-09-27 21:35:17.558874 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-27 21:35:17.558885 | orchestrator | Saturday 27 September 2025 21:33:51 +0000 (0:00:00.535) 0:00:00.737 **** 2025-09-27 21:35:17.558896 | orchestrator | ok: [testbed-manager] 2025-09-27 21:35:17.558907 | orchestrator | 2025-09-27 21:35:17.558918 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-27 21:35:17.558952 | orchestrator | Saturday 27 September 2025 21:33:52 +0000 (0:00:00.577) 0:00:01.314 **** 2025-09-27 21:35:17.558963 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-27 21:35:17.558974 | orchestrator | 2025-09-27 21:35:17.558985 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-27 21:35:17.558996 | orchestrator | Saturday 27 September 2025 21:33:53 +0000 (0:00:00.722) 0:00:02.037 **** 2025-09-27 21:35:17.559007 | orchestrator | changed: [testbed-manager] 2025-09-27 21:35:17.559020 | orchestrator | 2025-09-27 21:35:17.559032 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-27 21:35:17.559044 | orchestrator | Saturday 27 September 2025 21:33:54 +0000 (0:00:01.012) 0:00:03.049 **** 2025-09-27 21:35:17.559055 | orchestrator | changed: [testbed-manager] 2025-09-27 21:35:17.559067 | orchestrator | 2025-09-27 21:35:17.559079 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-27 21:35:17.559091 | orchestrator | Saturday 27 September 2025 21:33:54 +0000 (0:00:00.636) 0:00:03.685 **** 2025-09-27 21:35:17.559104 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 21:35:17.559115 | orchestrator | 2025-09-27 21:35:17.559128 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-27 21:35:17.559140 | orchestrator | Saturday 27 September 2025 21:33:56 +0000 (0:00:01.315) 0:00:05.001 **** 2025-09-27 21:35:17.559152 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 21:35:17.559163 | orchestrator | 2025-09-27 21:35:17.559174 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-27 21:35:17.559185 | orchestrator | Saturday 27 September 2025 21:33:56 +0000 (0:00:00.705) 0:00:05.706 **** 2025-09-27 21:35:17.559195 | orchestrator | ok: [testbed-manager] 2025-09-27 21:35:17.559206 | orchestrator | 2025-09-27 21:35:17.559217 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-27 21:35:17.559228 | orchestrator | Saturday 27 September 2025 21:33:57 +0000 (0:00:00.338) 0:00:06.044 **** 2025-09-27 21:35:17.559239 | 
orchestrator | ok: [testbed-manager] 2025-09-27 21:35:17.559250 | orchestrator | 2025-09-27 21:35:17.559261 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:35:17.559272 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:35:17.559282 | orchestrator | 2025-09-27 21:35:17.559293 | orchestrator | 2025-09-27 21:35:17.559304 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:35:17.559315 | orchestrator | Saturday 27 September 2025 21:33:57 +0000 (0:00:00.260) 0:00:06.305 **** 2025-09-27 21:35:17.559326 | orchestrator | =============================================================================== 2025-09-27 21:35:17.559336 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.32s 2025-09-27 21:35:17.559347 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.01s 2025-09-27 21:35:17.559358 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2025-09-27 21:35:17.559380 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.71s 2025-09-27 21:35:17.559391 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.64s 2025-09-27 21:35:17.559402 | orchestrator | Create .kube directory -------------------------------------------------- 0.58s 2025-09-27 21:35:17.559413 | orchestrator | Get home directory of operator user ------------------------------------- 0.54s 2025-09-27 21:35:17.559423 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s 2025-09-27 21:35:17.559472 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.26s 2025-09-27 21:35:17.559484 | orchestrator | 2025-09-27 21:35:17.559495 | orchestrator | 2025-09-27 21:35:17.559505 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-27 21:35:17.559516 | orchestrator | 2025-09-27 21:35:17.559536 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-27 21:35:17.559547 | orchestrator | Saturday 27 September 2025 21:32:50 +0000 (0:00:00.077) 0:00:00.077 **** 2025-09-27 21:35:17.559557 | orchestrator | ok: [localhost] => { 2025-09-27 21:35:17.559569 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-27 21:35:17.559580 | orchestrator | } 2025-09-27 21:35:17.559591 | orchestrator | 2025-09-27 21:35:17.559602 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-27 21:35:17.559613 | orchestrator | Saturday 27 September 2025 21:32:50 +0000 (0:00:00.033) 0:00:00.111 **** 2025-09-27 21:35:17.559625 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-27 21:35:17.559638 | orchestrator | ...ignoring 2025-09-27 21:35:17.559649 | orchestrator | 2025-09-27 21:35:17.559660 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-27 21:35:17.559671 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:02.721) 0:00:02.832 **** 2025-09-27 21:35:17.559681 | orchestrator | skipping: [localhost] 2025-09-27 21:35:17.559692 | orchestrator | 2025-09-27 21:35:17.559703 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-27 21:35:17.559713 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:00.052) 0:00:02.884 **** 2025-09-27 21:35:17.559724 | orchestrator | ok: [localhost] 2025-09-27 21:35:17.559735 | orchestrator | 2025-09-27 21:35:17.559751 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:35:17.559762 | orchestrator | 2025-09-27 21:35:17.559773 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:35:17.559784 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:00.165) 0:00:03.050 **** 2025-09-27 21:35:17.559795 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:17.559806 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:17.559817 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:17.559936 | orchestrator | 2025-09-27 21:35:17.559947 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:35:17.559958 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:00.330) 0:00:03.380 **** 2025-09-27 21:35:17.559969 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-27 21:35:17.559981 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-09-27 21:35:17.559992 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-27 21:35:17.560002 | orchestrator | 2025-09-27 21:35:17.560013 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-27 21:35:17.560024 | orchestrator | 2025-09-27 21:35:17.560035 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-27 21:35:17.560045 | orchestrator | Saturday 27 September 2025 21:32:54 +0000 (0:00:00.556) 0:00:03.937 **** 2025-09-27 21:35:17.560057 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:35:17.560068 | orchestrator | 2025-09-27 21:35:17.560079 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-27 21:35:17.560089 | orchestrator | Saturday 27 September 2025 21:32:55 +0000 (0:00:00.555) 0:00:04.493 **** 2025-09-27 21:35:17.560101 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:17.560111 | orchestrator | 2025-09-27 21:35:17.560122 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-27 21:35:17.560133 | orchestrator | Saturday 27 September 2025 21:32:56 +0000 (0:00:01.524) 0:00:06.017 **** 2025-09-27 21:35:17.560144 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.560155 | orchestrator | 2025-09-27 21:35:17.560166 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-09-27 21:35:17.560177 | orchestrator | Saturday 27 September 2025 21:32:56 +0000 (0:00:00.337) 0:00:06.354 **** 2025-09-27 21:35:17.560196 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.560207 | orchestrator | 2025-09-27 21:35:17.560218 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-27 21:35:17.560229 | orchestrator | Saturday 27 September 2025 21:32:57 +0000 (0:00:00.873) 0:00:07.228 **** 2025-09-27 21:35:17.560240 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.560251 | orchestrator | 2025-09-27 21:35:17.560262 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-27 21:35:17.560272 | orchestrator | Saturday 27 September 2025 21:32:58 +0000 (0:00:00.582) 0:00:07.811 **** 2025-09-27 21:35:17.560283 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.560294 | orchestrator | 2025-09-27 21:35:17.560305 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-27 21:35:17.560316 | orchestrator | Saturday 27 September 2025 21:32:59 +0000 (0:00:01.310) 0:00:09.121 **** 2025-09-27 21:35:17.560327 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:35:17.560338 | orchestrator | 2025-09-27 21:35:17.560349 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-27 21:35:17.560368 | orchestrator | Saturday 27 September 2025 21:33:00 +0000 (0:00:01.178) 0:00:10.299 **** 2025-09-27 21:35:17.560395 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:17.560407 | orchestrator | 2025-09-27 21:35:17.560450 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-27 21:35:17.560464 | orchestrator | Saturday 27 September 2025 21:33:02 +0000 (0:00:01.148) 0:00:11.448 **** 2025-09-27 21:35:17.560474 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.560485 | orchestrator | 2025-09-27 21:35:17.560496 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-27 21:35:17.560507 | orchestrator | Saturday 27 September 2025 21:33:02 +0000 (0:00:00.696) 0:00:12.145 **** 2025-09-27 21:35:17.560518 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.560529 | orchestrator | 2025-09-27 21:35:17.560539 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-27 21:35:17.560551 | orchestrator | Saturday 27 September 2025 21:33:03 +0000 (0:00:01.191) 0:00:13.336 **** 2025-09-27 21:35:17.560575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.560595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.560617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.560630 | orchestrator | 2025-09-27 21:35:17.560642 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-27 21:35:17.560654 | orchestrator | Saturday 27 September 2025 21:33:05 +0000 (0:00:01.316) 0:00:14.653 **** 2025-09-27 21:35:17.560676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.560696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.560710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.560730 | orchestrator | 2025-09-27 21:35:17.560742 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-27 21:35:17.560754 | orchestrator | Saturday 27 September 2025 21:33:07 +0000 (0:00:01.884) 0:00:16.537 **** 2025-09-27 21:35:17.560766 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-27 21:35:17.560778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-27 21:35:17.560791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-27 21:35:17.560803 | orchestrator | 2025-09-27 21:35:17.560815 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-27 21:35:17.560827 | orchestrator | Saturday 27 September 2025 21:33:09 +0000 (0:00:02.069) 0:00:18.606 **** 2025-09-27 21:35:17.560839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 
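
Editor's note: the rabbitmq tasks in this part of the log all loop over the same service definition (the long item=... dumps above): one dict per container describing image, bind mounts, health check, and HAProxy frontend. Below is a trimmed, illustrative rendering of that structure and of the kind of iteration the "Ensuring config directories exist" / "Copying over ..." tasks perform; it is not kolla-ansible's actual code, and the cluster cookie and bootstrap environment are deliberately omitted:

```python
# Illustrative only: the shape of the service definition the rabbitmq role
# iterates over, trimmed from the item=... dumps above (secrets and the
# bootstrap environment left out). Not kolla-ansible's implementation.
rabbitmq_services = {
    "rabbitmq": {
        "container_name": "rabbitmq",
        "group": "rabbitmq",
        "enabled": True,
        "image": "registry.osism.tech/kolla/rabbitmq:2024.2",
        "volumes": [
            "/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "rabbitmq:/var/lib/rabbitmq/",
            "kolla_logs:/var/log/kolla/",
        ],
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_rabbitmq"],
            "timeout": "30",
        },
        "haproxy": {
            "rabbitmq_management": {
                "enabled": "yes",
                "mode": "http",
                "port": "15672",
                "host_group": "rabbitmq",
            },
        },
    },
}

# The config tasks effectively resolve to one config directory per enabled
# service plus one templated file per item, on every host in the group.
for name, service in rabbitmq_services.items():
    if not service["enabled"]:
        continue
    print(f"mkdir -p /etc/kolla/{service['container_name']}")
    for template in ("rabbitmq-env.conf", "rabbitmq.conf", "erl_inetrc",
                     "advanced.config", "definitions.json", "enabled_plugins"):
        print(f"render {template}.j2 -> /etc/kolla/{service['container_name']}/{template}")
```

The ovn-controller plays later in this log follow the same pattern with a smaller per-service dict (no healthcheck or haproxy section).
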
2025-09-27 21:35:17.560851 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-27 21:35:17.560877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-27 21:35:17.560889 | orchestrator | 2025-09-27 21:35:17.560901 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-27 21:35:17.560918 | orchestrator | Saturday 27 September 2025 21:33:11 +0000 (0:00:01.814) 0:00:20.421 **** 2025-09-27 21:35:17.560929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-27 21:35:17.560940 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-27 21:35:17.560950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-27 21:35:17.560961 | orchestrator | 2025-09-27 21:35:17.560972 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-27 21:35:17.560982 | orchestrator | Saturday 27 September 2025 21:33:12 +0000 (0:00:01.502) 0:00:21.923 **** 2025-09-27 21:35:17.560993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-27 21:35:17.561004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-27 21:35:17.561014 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-27 21:35:17.561025 | orchestrator | 2025-09-27 21:35:17.561036 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-27 21:35:17.561046 | orchestrator | Saturday 27 September 2025 21:33:14 +0000 (0:00:01.933) 0:00:23.857 **** 2025-09-27 21:35:17.561057 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-27 21:35:17.561068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-27 21:35:17.561079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-27 21:35:17.561090 | orchestrator | 2025-09-27 21:35:17.561100 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-27 21:35:17.561118 | orchestrator | Saturday 27 September 2025 21:33:16 +0000 (0:00:01.643) 0:00:25.500 **** 2025-09-27 21:35:17.561128 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-27 21:35:17.561139 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-27 21:35:17.561154 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-27 21:35:17.561165 | orchestrator | 2025-09-27 21:35:17.561176 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-27 21:35:17.561187 | orchestrator | Saturday 27 September 2025 21:33:17 +0000 (0:00:01.806) 0:00:27.307 **** 2025-09-27 21:35:17.561197 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.561208 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:17.561219 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:17.561229 | orchestrator | 2025-09-27 
21:35:17.561240 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-27 21:35:17.561251 | orchestrator | Saturday 27 September 2025 21:33:19 +0000 (0:00:01.274) 0:00:28.581 **** 2025-09-27 21:35:17.561263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.561281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.561294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:35:17.561312 | orchestrator | 2025-09-27 
21:35:17.561323 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-27 21:35:17.561334 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:02.088) 0:00:30.669 **** 2025-09-27 21:35:17.561345 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:17.561356 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:17.561366 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:17.561377 | orchestrator | 2025-09-27 21:35:17.561388 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-27 21:35:17.561398 | orchestrator | Saturday 27 September 2025 21:33:22 +0000 (0:00:00.865) 0:00:31.534 **** 2025-09-27 21:35:17.561414 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:17.561425 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:17.561498 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:17.561510 | orchestrator | 2025-09-27 21:35:17.561520 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-27 21:35:17.561531 | orchestrator | Saturday 27 September 2025 21:33:31 +0000 (0:00:09.107) 0:00:40.641 **** 2025-09-27 21:35:17.561542 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:17.561553 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:17.561563 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:17.561574 | orchestrator | 2025-09-27 21:35:17.561585 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-27 21:35:17.561596 | orchestrator | 2025-09-27 21:35:17.561606 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-27 21:35:17.561617 | orchestrator | Saturday 27 September 2025 21:33:32 +0000 (0:00:00.853) 0:00:41.495 **** 2025-09-27 21:35:17.561628 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:17.561638 | orchestrator | 2025-09-27 21:35:17.561649 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-27 21:35:17.561660 | orchestrator | Saturday 27 September 2025 21:33:32 +0000 (0:00:00.800) 0:00:42.296 **** 2025-09-27 21:35:17.561671 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:17.561681 | orchestrator | 2025-09-27 21:35:17.561692 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-27 21:35:17.561703 | orchestrator | Saturday 27 September 2025 21:33:33 +0000 (0:00:00.225) 0:00:42.522 **** 2025-09-27 21:35:17.561714 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:17.561724 | orchestrator | 2025-09-27 21:35:17.561735 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-27 21:35:17.561746 | orchestrator | Saturday 27 September 2025 21:33:35 +0000 (0:00:02.105) 0:00:44.627 **** 2025-09-27 21:35:17.561756 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:17.561767 | orchestrator | 2025-09-27 21:35:17.561778 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-27 21:35:17.561788 | orchestrator | 2025-09-27 21:35:17.561799 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-27 21:35:17.561810 | orchestrator | Saturday 27 September 2025 21:34:32 +0000 (0:00:57.413) 0:01:42.040 **** 2025-09-27 21:35:17.561820 | orchestrator | ok: [testbed-node-1] 2025-09-27 
21:35:17.561831 | orchestrator | 2025-09-27 21:35:17.561842 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-27 21:35:17.561852 | orchestrator | Saturday 27 September 2025 21:34:33 +0000 (0:00:00.622) 0:01:42.663 **** 2025-09-27 21:35:17.561863 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:17.561874 | orchestrator | 2025-09-27 21:35:17.561884 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-27 21:35:17.561903 | orchestrator | Saturday 27 September 2025 21:34:33 +0000 (0:00:00.244) 0:01:42.908 **** 2025-09-27 21:35:17.561914 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:17.561925 | orchestrator | 2025-09-27 21:35:17.561935 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-27 21:35:17.561946 | orchestrator | Saturday 27 September 2025 21:34:35 +0000 (0:00:01.951) 0:01:44.859 **** 2025-09-27 21:35:17.561957 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:17.561968 | orchestrator | 2025-09-27 21:35:17.561978 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-27 21:35:17.561989 | orchestrator | 2025-09-27 21:35:17.562000 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-27 21:35:17.562009 | orchestrator | Saturday 27 September 2025 21:34:52 +0000 (0:00:17.244) 0:02:02.103 **** 2025-09-27 21:35:17.562062 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:17.562073 | orchestrator | 2025-09-27 21:35:17.562088 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-27 21:35:17.562099 | orchestrator | Saturday 27 September 2025 21:34:53 +0000 (0:00:00.693) 0:02:02.796 **** 2025-09-27 21:35:17.562108 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:17.562118 | orchestrator | 2025-09-27 21:35:17.562127 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-27 21:35:17.562137 | orchestrator | Saturday 27 September 2025 21:34:53 +0000 (0:00:00.241) 0:02:03.038 **** 2025-09-27 21:35:17.562146 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:17.562156 | orchestrator | 2025-09-27 21:35:17.562166 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-27 21:35:17.562175 | orchestrator | Saturday 27 September 2025 21:34:55 +0000 (0:00:01.810) 0:02:04.848 **** 2025-09-27 21:35:17.562185 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:17.562194 | orchestrator | 2025-09-27 21:35:17.562204 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-27 21:35:17.562213 | orchestrator | 2025-09-27 21:35:17.562223 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-27 21:35:17.562232 | orchestrator | Saturday 27 September 2025 21:35:11 +0000 (0:00:16.417) 0:02:21.265 **** 2025-09-27 21:35:17.562241 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:35:17.562251 | orchestrator | 2025-09-27 21:35:17.562260 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-27 21:35:17.562270 | orchestrator | Saturday 27 September 2025 21:35:12 +0000 (0:00:00.565) 0:02:21.831 **** 2025-09-27 21:35:17.562280 | orchestrator | 
[WARNING]: Could not match supplied host pattern, ignoring: 2025-09-27 21:35:17.562289 | orchestrator | enable_outward_rabbitmq_True 2025-09-27 21:35:17.562298 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-27 21:35:17.562308 | orchestrator | outward_rabbitmq_restart 2025-09-27 21:35:17.562317 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:17.562327 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:17.562336 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:17.562346 | orchestrator | 2025-09-27 21:35:17.562356 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-27 21:35:17.562365 | orchestrator | skipping: no hosts matched 2025-09-27 21:35:17.562375 | orchestrator | 2025-09-27 21:35:17.562384 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-27 21:35:17.562394 | orchestrator | skipping: no hosts matched 2025-09-27 21:35:17.562403 | orchestrator | 2025-09-27 21:35:17.562424 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-27 21:35:17.562460 | orchestrator | skipping: no hosts matched 2025-09-27 21:35:17.562477 | orchestrator | 2025-09-27 21:35:17.562492 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:35:17.562502 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-27 21:35:17.562520 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 21:35:17.562530 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:35:17.562540 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:35:17.562550 | orchestrator | 2025-09-27 21:35:17.562559 | orchestrator | 2025-09-27 21:35:17.562569 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:35:17.562579 | orchestrator | Saturday 27 September 2025 21:35:15 +0000 (0:00:02.742) 0:02:24.573 **** 2025-09-27 21:35:17.562589 | orchestrator | =============================================================================== 2025-09-27 21:35:17.562598 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 91.08s 2025-09-27 21:35:17.562608 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.11s 2025-09-27 21:35:17.562617 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.87s 2025-09-27 21:35:17.562627 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.74s 2025-09-27 21:35:17.562637 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.72s 2025-09-27 21:35:17.562646 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.12s 2025-09-27 21:35:17.562656 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.09s 2025-09-27 21:35:17.562665 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.07s 2025-09-27 21:35:17.562675 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.93s 2025-09-27 21:35:17.562684 | orchestrator | rabbitmq : Copying 
over config.json files for services ------------------ 1.88s 2025-09-27 21:35:17.562694 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.81s 2025-09-27 21:35:17.562704 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.81s 2025-09-27 21:35:17.562713 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.64s 2025-09-27 21:35:17.562723 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.52s 2025-09-27 21:35:17.562732 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.50s 2025-09-27 21:35:17.562742 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.32s 2025-09-27 21:35:17.562752 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.31s 2025-09-27 21:35:17.562766 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.27s 2025-09-27 21:35:17.562777 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.19s 2025-09-27 21:35:17.562786 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.18s 2025-09-27 21:35:17.562796 | orchestrator | 2025-09-27 21:35:17 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:17.562806 | orchestrator | 2025-09-27 21:35:17 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:17.562816 | orchestrator | 2025-09-27 21:35:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:20.594591 | orchestrator | 2025-09-27 21:35:20 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:20.596599 | orchestrator | 2025-09-27 21:35:20 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:20.597652 | orchestrator | 2025-09-27 21:35:20 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:20.597692 | orchestrator | 2025-09-27 21:35:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:23.628856 | orchestrator | 2025-09-27 21:35:23 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:23.629943 | orchestrator | 2025-09-27 21:35:23 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:23.631710 | orchestrator | 2025-09-27 21:35:23 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:23.631996 | orchestrator | 2025-09-27 21:35:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:26.665950 | orchestrator | 2025-09-27 21:35:26 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:26.666280 | orchestrator | 2025-09-27 21:35:26 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:26.667868 | orchestrator | 2025-09-27 21:35:26 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:26.667895 | orchestrator | 2025-09-27 21:35:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:29.705233 | orchestrator | 2025-09-27 21:35:29 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:29.705564 | orchestrator | 2025-09-27 21:35:29 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:29.706199 | 
orchestrator | 2025-09-27 21:35:29 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:29.706227 | orchestrator | 2025-09-27 21:35:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:32.751117 | orchestrator | 2025-09-27 21:35:32 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:32.752716 | orchestrator | 2025-09-27 21:35:32 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:32.755619 | orchestrator | 2025-09-27 21:35:32 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:32.757357 | orchestrator | 2025-09-27 21:35:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:35.800192 | orchestrator | 2025-09-27 21:35:35 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:35.801909 | orchestrator | 2025-09-27 21:35:35 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:35.803510 | orchestrator | 2025-09-27 21:35:35 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:35.803537 | orchestrator | 2025-09-27 21:35:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:38.837278 | orchestrator | 2025-09-27 21:35:38 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:38.839247 | orchestrator | 2025-09-27 21:35:38 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:38.840945 | orchestrator | 2025-09-27 21:35:38 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:38.841246 | orchestrator | 2025-09-27 21:35:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:41.878870 | orchestrator | 2025-09-27 21:35:41 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:41.881013 | orchestrator | 2025-09-27 21:35:41 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:41.883092 | orchestrator | 2025-09-27 21:35:41 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:41.883763 | orchestrator | 2025-09-27 21:35:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:44.931734 | orchestrator | 2025-09-27 21:35:44 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:44.933732 | orchestrator | 2025-09-27 21:35:44 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:44.936661 | orchestrator | 2025-09-27 21:35:44 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:44.936685 | orchestrator | 2025-09-27 21:35:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:47.973753 | orchestrator | 2025-09-27 21:35:47 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:47.975497 | orchestrator | 2025-09-27 21:35:47 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:47.977459 | orchestrator | 2025-09-27 21:35:47 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:47.977482 | orchestrator | 2025-09-27 21:35:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:51.014849 | orchestrator | 2025-09-27 21:35:51 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:51.014955 | orchestrator | 2025-09-27 21:35:51 | INFO  | Task 
579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:51.014971 | orchestrator | 2025-09-27 21:35:51 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:51.014983 | orchestrator | 2025-09-27 21:35:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:54.053318 | orchestrator | 2025-09-27 21:35:54 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:54.054251 | orchestrator | 2025-09-27 21:35:54 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state STARTED 2025-09-27 21:35:54.054840 | orchestrator | 2025-09-27 21:35:54 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:35:54.055058 | orchestrator | 2025-09-27 21:35:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:35:57.099993 | orchestrator | 2025-09-27 21:35:57 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED 2025-09-27 21:35:57.102960 | orchestrator | 2025-09-27 21:35:57 | INFO  | Task 579a2840-0790-4637-ae1d-660eec761d85 is in state SUCCESS 2025-09-27 21:35:57.105348 | orchestrator | 2025-09-27 21:35:57.105796 | orchestrator | 2025-09-27 21:35:57.105818 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:35:57.105829 | orchestrator | 2025-09-27 21:35:57.105840 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:35:57.105850 | orchestrator | Saturday 27 September 2025 21:33:30 +0000 (0:00:00.190) 0:00:00.190 **** 2025-09-27 21:35:57.105860 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:35:57.105872 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:35:57.105882 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:35:57.105891 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.105901 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.105911 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.105921 | orchestrator | 2025-09-27 21:35:57.105984 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:35:57.105997 | orchestrator | Saturday 27 September 2025 21:33:31 +0000 (0:00:00.937) 0:00:01.128 **** 2025-09-27 21:35:57.106007 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-27 21:35:57.106078 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-27 21:35:57.106092 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-27 21:35:57.106102 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-27 21:35:57.106112 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-27 21:35:57.106121 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-27 21:35:57.106152 | orchestrator | 2025-09-27 21:35:57.106163 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-27 21:35:57.106172 | orchestrator | 2025-09-27 21:35:57.106182 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-27 21:35:57.106192 | orchestrator | Saturday 27 September 2025 21:33:33 +0000 (0:00:01.639) 0:00:02.768 **** 2025-09-27 21:35:57.106203 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:35:57.106214 | orchestrator | 2025-09-27 21:35:57.106224 | 
orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-27 21:35:57.106234 | orchestrator | Saturday 27 September 2025 21:33:34 +0000 (0:00:01.244) 0:00:04.012 **** 2025-09-27 21:35:57.106246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106318 | orchestrator | 2025-09-27 21:35:57.106339 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-27 21:35:57.106349 | orchestrator | Saturday 27 September 2025 21:33:36 +0000 (0:00:01.637) 0:00:05.650 **** 2025-09-27 21:35:57.106359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106460 | orchestrator | 2025-09-27 21:35:57.106470 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-27 21:35:57.106480 | orchestrator | Saturday 27 September 2025 21:33:39 +0000 (0:00:02.677) 0:00:08.328 **** 2025-09-27 21:35:57.106490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106501 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106579 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106641 | orchestrator | 2025-09-27 21:35:57.106652 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-27 21:35:57.106663 | orchestrator | Saturday 27 September 2025 21:33:40 +0000 (0:00:01.921) 0:00:10.249 **** 2025-09-27 21:35:57.106674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106751 | orchestrator | 2025-09-27 21:35:57.106770 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-27 21:35:57.106781 | orchestrator | Saturday 27 September 2025 21:33:42 +0000 (0:00:01.306) 0:00:11.555 **** 2025-09-27 21:35:57.106793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.106858 | orchestrator | 2025-09-27 21:35:57.106869 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-27 21:35:57.106880 | orchestrator | Saturday 27 September 2025 21:33:43 +0000 (0:00:01.461) 0:00:13.017 **** 2025-09-27 21:35:57.106891 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:35:57.106902 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:35:57.106913 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:35:57.106923 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.106934 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:57.106944 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:57.106955 | orchestrator | 2025-09-27 21:35:57.106966 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-27 21:35:57.106976 | orchestrator | Saturday 27 September 2025 21:33:47 +0000 (0:00:03.386) 0:00:16.403 **** 2025-09-27 21:35:57.106995 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-27 21:35:57.107005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-27 21:35:57.107015 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-27 21:35:57.107028 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-27 21:35:57.107038 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-27 21:35:57.107048 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-27 21:35:57.107057 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 21:35:57.107067 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 21:35:57.107081 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 21:35:57.107091 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 21:35:57.107101 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 21:35:57.107111 | 
orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-27 21:35:57.107120 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 21:35:57.107131 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 21:35:57.107141 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 21:35:57.107151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 21:35:57.107161 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 21:35:57.107170 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2025-09-27 21:35:57.107180 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 21:35:57.107190 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 21:35:57.107200 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 21:35:57.107209 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 21:35:57.107219 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 21:35:57.107229 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-27 21:35:57.107238 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 21:35:57.107248 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 21:35:57.107257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 21:35:57.107267 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 21:35:57.107276 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 21:35:57.107286 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-27 21:35:57.107301 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 21:35:57.107310 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 21:35:57.107320 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 21:35:57.107329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 21:35:57.107339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-27 21:35:57.107349 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2025-09-27 21:35:57.107358 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-27 21:35:57.107368 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-27 21:35:57.107394 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-27 21:35:57.107404 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-27 21:35:57.107417 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-27 21:35:57.107427 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-27 21:35:57.107437 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-27 21:35:57.107447 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-27 21:35:57.107462 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-27 21:35:57.107472 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-27 21:35:57.107481 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-27 21:35:57.107491 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-27 21:35:57.107501 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-27 21:35:57.107510 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-27 21:35:57.107520 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-27 21:35:57.107530 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-27 21:35:57.107539 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-27 21:35:57.107549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-27 21:35:57.107558 | orchestrator | 2025-09-27 21:35:57.107568 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 21:35:57.107578 | orchestrator | Saturday 27 September 2025 21:34:09 +0000 (0:00:22.347) 0:00:38.750 **** 2025-09-27 21:35:57.107587 | orchestrator | 2025-09-27 21:35:57.107597 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 21:35:57.107607 | orchestrator | Saturday 27 September 2025 21:34:09 +0000 (0:00:00.300) 0:00:39.051 **** 2025-09-27 
21:35:57.107622 | orchestrator | 2025-09-27 21:35:57.107631 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 21:35:57.107641 | orchestrator | Saturday 27 September 2025 21:34:09 +0000 (0:00:00.068) 0:00:39.120 **** 2025-09-27 21:35:57.107651 | orchestrator | 2025-09-27 21:35:57.107660 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 21:35:57.107670 | orchestrator | Saturday 27 September 2025 21:34:09 +0000 (0:00:00.069) 0:00:39.189 **** 2025-09-27 21:35:57.107679 | orchestrator | 2025-09-27 21:35:57.107689 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 21:35:57.107699 | orchestrator | Saturday 27 September 2025 21:34:09 +0000 (0:00:00.066) 0:00:39.255 **** 2025-09-27 21:35:57.107708 | orchestrator | 2025-09-27 21:35:57.107718 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-27 21:35:57.107727 | orchestrator | Saturday 27 September 2025 21:34:10 +0000 (0:00:00.080) 0:00:39.335 **** 2025-09-27 21:35:57.107737 | orchestrator | 2025-09-27 21:35:57.107746 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-27 21:35:57.107756 | orchestrator | Saturday 27 September 2025 21:34:10 +0000 (0:00:00.063) 0:00:39.399 **** 2025-09-27 21:35:57.107765 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:35:57.107775 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:35:57.107784 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.107794 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.107804 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:35:57.107813 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.107823 | orchestrator | 2025-09-27 21:35:57.107832 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-27 21:35:57.107842 | orchestrator | Saturday 27 September 2025 21:34:11 +0000 (0:00:01.606) 0:00:41.006 **** 2025-09-27 21:35:57.107852 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.107862 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:35:57.107871 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:35:57.107881 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:35:57.107890 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:57.107899 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:57.107909 | orchestrator | 2025-09-27 21:35:57.107919 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-27 21:35:57.107928 | orchestrator | 2025-09-27 21:35:57.107938 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-27 21:35:57.107947 | orchestrator | Saturday 27 September 2025 21:34:41 +0000 (0:00:29.835) 0:01:10.841 **** 2025-09-27 21:35:57.107957 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:35:57.107967 | orchestrator | 2025-09-27 21:35:57.107976 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-27 21:35:57.107986 | orchestrator | Saturday 27 September 2025 21:34:42 +0000 (0:00:00.702) 0:01:11.544 **** 2025-09-27 21:35:57.108000 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 
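Editor's note: the per-chassis settings applied by the "ovn-controller : Configure OVN in OVSDB" task earlier in this output end up as external_ids on each node's local Open vSwitch database. The lines below are only a hedged sketch of an equivalent manual configuration (the kolla role uses its own module rather than a raw command); the values are the ones logged for testbed-node-0:

    # Inspect what the task wrote on a node
    ovs-vsctl get Open_vSwitch . external_ids

    # Roughly equivalent manual configuration for testbed-node-0, using the logged values
    ovs-vsctl set Open_vSwitch . \
        external_ids:ovn-encap-ip=192.168.16.10 \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-remote="tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641" \
        external_ids:ovn-remote-probe-interval=60000 \
        external_ids:ovn-openflow-probe-interval=60 \
        external_ids:ovn-monitor-all=false \
        external_ids:ovn-bridge-mappings=physnet1:br-ex \
        external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"

Per the log, the gateway-related keys differ by node: ovn-bridge-mappings and ovn-cms-options are set (state 'present') only on testbed-node-0/1/2, while testbed-node-3/4/5 instead get ovn-chassis-mac-mappings for physnet1.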
2025-09-27 21:35:57.108009 | orchestrator | 2025-09-27 21:35:57.108019 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-27 21:35:57.108029 | orchestrator | Saturday 27 September 2025 21:34:42 +0000 (0:00:00.581) 0:01:12.125 **** 2025-09-27 21:35:57.108039 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.108048 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.108058 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.108067 | orchestrator | 2025-09-27 21:35:57.108077 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-27 21:35:57.108087 | orchestrator | Saturday 27 September 2025 21:34:43 +0000 (0:00:00.950) 0:01:13.075 **** 2025-09-27 21:35:57.108096 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.108106 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.108125 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.108136 | orchestrator | 2025-09-27 21:35:57.108145 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-27 21:35:57.108155 | orchestrator | Saturday 27 September 2025 21:34:44 +0000 (0:00:00.382) 0:01:13.457 **** 2025-09-27 21:35:57.108165 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.108174 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.108184 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.108193 | orchestrator | 2025-09-27 21:35:57.108203 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-27 21:35:57.108212 | orchestrator | Saturday 27 September 2025 21:34:44 +0000 (0:00:00.318) 0:01:13.776 **** 2025-09-27 21:35:57.108222 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.108231 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.108241 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.108250 | orchestrator | 2025-09-27 21:35:57.108260 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-27 21:35:57.108270 | orchestrator | Saturday 27 September 2025 21:34:44 +0000 (0:00:00.373) 0:01:14.149 **** 2025-09-27 21:35:57.108279 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.108289 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.108298 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.108308 | orchestrator | 2025-09-27 21:35:57.108317 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-27 21:35:57.108327 | orchestrator | Saturday 27 September 2025 21:34:45 +0000 (0:00:00.653) 0:01:14.803 **** 2025-09-27 21:35:57.108337 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108346 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108356 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108366 | orchestrator | 2025-09-27 21:35:57.108375 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-27 21:35:57.108434 | orchestrator | Saturday 27 September 2025 21:34:45 +0000 (0:00:00.354) 0:01:15.158 **** 2025-09-27 21:35:57.108445 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108454 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108464 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108473 | orchestrator | 2025-09-27 21:35:57.108483 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port 
liveness] ************* 2025-09-27 21:35:57.108493 | orchestrator | Saturday 27 September 2025 21:34:46 +0000 (0:00:00.316) 0:01:15.474 **** 2025-09-27 21:35:57.108503 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108512 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108522 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108531 | orchestrator | 2025-09-27 21:35:57.108541 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-27 21:35:57.108551 | orchestrator | Saturday 27 September 2025 21:34:46 +0000 (0:00:00.285) 0:01:15.759 **** 2025-09-27 21:35:57.108560 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108570 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108579 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108589 | orchestrator | 2025-09-27 21:35:57.108598 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-27 21:35:57.108608 | orchestrator | Saturday 27 September 2025 21:34:46 +0000 (0:00:00.447) 0:01:16.206 **** 2025-09-27 21:35:57.108618 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108627 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108637 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108646 | orchestrator | 2025-09-27 21:35:57.108656 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-27 21:35:57.108666 | orchestrator | Saturday 27 September 2025 21:34:47 +0000 (0:00:00.300) 0:01:16.506 **** 2025-09-27 21:35:57.108675 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108685 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108694 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108704 | orchestrator | 2025-09-27 21:35:57.108720 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-27 21:35:57.108730 | orchestrator | Saturday 27 September 2025 21:34:47 +0000 (0:00:00.272) 0:01:16.779 **** 2025-09-27 21:35:57.108739 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108749 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108758 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108768 | orchestrator | 2025-09-27 21:35:57.108778 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-27 21:35:57.108787 | orchestrator | Saturday 27 September 2025 21:34:47 +0000 (0:00:00.269) 0:01:17.049 **** 2025-09-27 21:35:57.108797 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108806 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108816 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108825 | orchestrator | 2025-09-27 21:35:57.108835 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-27 21:35:57.108845 | orchestrator | Saturday 27 September 2025 21:34:48 +0000 (0:00:00.284) 0:01:17.333 **** 2025-09-27 21:35:57.108854 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108864 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108873 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108883 | orchestrator | 2025-09-27 21:35:57.108892 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-27 21:35:57.108902 | 
orchestrator | Saturday 27 September 2025 21:34:48 +0000 (0:00:00.451) 0:01:17.785 **** 2025-09-27 21:35:57.108912 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108921 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108931 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.108941 | orchestrator | 2025-09-27 21:35:57.108954 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-27 21:35:57.108964 | orchestrator | Saturday 27 September 2025 21:34:48 +0000 (0:00:00.286) 0:01:18.071 **** 2025-09-27 21:35:57.108974 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.108983 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.108993 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109002 | orchestrator | 2025-09-27 21:35:57.109012 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-27 21:35:57.109022 | orchestrator | Saturday 27 September 2025 21:34:49 +0000 (0:00:00.287) 0:01:18.359 **** 2025-09-27 21:35:57.109031 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.109041 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109056 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109066 | orchestrator | 2025-09-27 21:35:57.109076 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-27 21:35:57.109085 | orchestrator | Saturday 27 September 2025 21:34:49 +0000 (0:00:00.278) 0:01:18.637 **** 2025-09-27 21:35:57.109095 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:35:57.109105 | orchestrator | 2025-09-27 21:35:57.109115 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-27 21:35:57.109124 | orchestrator | Saturday 27 September 2025 21:34:50 +0000 (0:00:00.742) 0:01:19.380 **** 2025-09-27 21:35:57.109134 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.109144 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.109153 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.109163 | orchestrator | 2025-09-27 21:35:57.109172 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-27 21:35:57.109182 | orchestrator | Saturday 27 September 2025 21:34:50 +0000 (0:00:00.425) 0:01:19.805 **** 2025-09-27 21:35:57.109192 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.109201 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.109211 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.109220 | orchestrator | 2025-09-27 21:35:57.109230 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-27 21:35:57.109240 | orchestrator | Saturday 27 September 2025 21:34:50 +0000 (0:00:00.445) 0:01:20.251 **** 2025-09-27 21:35:57.109255 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.109265 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109274 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109284 | orchestrator | 2025-09-27 21:35:57.109294 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-27 21:35:57.109303 | orchestrator | Saturday 27 September 2025 21:34:51 +0000 (0:00:00.492) 0:01:20.743 **** 2025-09-27 21:35:57.109313 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 21:35:57.109322 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109332 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109342 | orchestrator | 2025-09-27 21:35:57.109351 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-27 21:35:57.109361 | orchestrator | Saturday 27 September 2025 21:34:51 +0000 (0:00:00.328) 0:01:21.072 **** 2025-09-27 21:35:57.109370 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.109397 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109407 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109416 | orchestrator | 2025-09-27 21:35:57.109426 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-27 21:35:57.109436 | orchestrator | Saturday 27 September 2025 21:34:52 +0000 (0:00:00.330) 0:01:21.403 **** 2025-09-27 21:35:57.109445 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.109455 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109465 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109474 | orchestrator | 2025-09-27 21:35:57.109484 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-27 21:35:57.109493 | orchestrator | Saturday 27 September 2025 21:34:52 +0000 (0:00:00.298) 0:01:21.702 **** 2025-09-27 21:35:57.109503 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.109512 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109522 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109531 | orchestrator | 2025-09-27 21:35:57.109541 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-27 21:35:57.109551 | orchestrator | Saturday 27 September 2025 21:34:52 +0000 (0:00:00.473) 0:01:22.175 **** 2025-09-27 21:35:57.109560 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.109570 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.109580 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.109589 | orchestrator | 2025-09-27 21:35:57.109599 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-27 21:35:57.109609 | orchestrator | Saturday 27 September 2025 21:34:53 +0000 (0:00:00.304) 0:01:22.480 **** 2025-09-27 21:35:57.109619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109733 | orchestrator | 2025-09-27 21:35:57.109743 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-27 21:35:57.109753 | orchestrator | Saturday 27 September 2025 21:34:54 +0000 (0:00:01.688) 0:01:24.168 **** 2025-09-27 21:35:57.109763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109773 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109868 | orchestrator | 2025-09-27 21:35:57.109878 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-27 21:35:57.109888 | 
orchestrator | Saturday 27 September 2025 21:34:59 +0000 (0:00:04.520) 0:01:28.688 **** 2025-09-27 21:35:57.109898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.109991 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110001 | orchestrator | 2025-09-27 21:35:57.110011 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-27 21:35:57.110052 | orchestrator | Saturday 27 September 2025 21:35:01 +0000 (0:00:02.329) 0:01:31.018 **** 2025-09-27 21:35:57.110062 | orchestrator | 2025-09-27 21:35:57.110072 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-27 21:35:57.110082 | orchestrator | Saturday 27 September 2025 21:35:01 +0000 (0:00:00.250) 0:01:31.268 **** 2025-09-27 21:35:57.110091 | orchestrator | 2025-09-27 21:35:57.110101 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-27 21:35:57.110110 | orchestrator | Saturday 27 September 2025 21:35:02 +0000 (0:00:00.077) 0:01:31.345 **** 2025-09-27 21:35:57.110120 | orchestrator | 2025-09-27 21:35:57.110129 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-27 21:35:57.110139 | orchestrator | Saturday 27 September 2025 21:35:02 +0000 (0:00:00.067) 0:01:31.413 **** 2025-09-27 21:35:57.110148 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.110158 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:57.110168 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:57.110177 | orchestrator | 2025-09-27 21:35:57.110187 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-27 21:35:57.110197 | orchestrator | Saturday 27 September 2025 21:35:09 +0000 (0:00:07.480) 0:01:38.893 **** 2025-09-27 21:35:57.110206 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.110216 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:57.110225 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:57.110235 | orchestrator | 2025-09-27 21:35:57.110244 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-27 21:35:57.110260 | orchestrator | Saturday 27 September 2025 21:35:12 +0000 (0:00:02.507) 0:01:41.400 **** 2025-09-27 21:35:57.110270 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.110279 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:35:57.110289 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:35:57.110298 | orchestrator | 2025-09-27 21:35:57.110308 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-27 21:35:57.110318 | orchestrator | Saturday 27 September 2025 21:35:14 +0000 (0:00:02.894) 0:01:44.294 **** 2025-09-27 21:35:57.110327 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:35:57.110337 | orchestrator | 2025-09-27 21:35:57.110346 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-27 21:35:57.110356 | orchestrator | Saturday 27 September 2025 21:35:15 +0000 (0:00:00.341) 0:01:44.636 **** 2025-09-27 21:35:57.110365 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.110375 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.110429 | orchestrator | ok: [testbed-node-0] 
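Editor's note: the three restart handlers above start the NB/SB database servers and ovn-northd on testbed-node-0/1/2, after which the "Wait for leader election" and "Get OVN_Northbound cluster leader" tasks determine the Raft leader before the connection settings are applied. A hedged way to check the same cluster state by hand is sketched below; the control socket paths are the upstream OVN defaults and may differ inside the kolla images:

    # Raft cluster health for the northbound and southbound databases
    docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound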
2025-09-27 21:35:57.110439 | orchestrator | 2025-09-27 21:35:57.110449 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-27 21:35:57.110458 | orchestrator | Saturday 27 September 2025 21:35:16 +0000 (0:00:00.995) 0:01:45.632 **** 2025-09-27 21:35:57.110468 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.110477 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.110487 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.110497 | orchestrator | 2025-09-27 21:35:57.110506 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-27 21:35:57.110516 | orchestrator | Saturday 27 September 2025 21:35:16 +0000 (0:00:00.689) 0:01:46.322 **** 2025-09-27 21:35:57.110526 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.110535 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.110550 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.110560 | orchestrator | 2025-09-27 21:35:57.110569 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-27 21:35:57.110579 | orchestrator | Saturday 27 September 2025 21:35:17 +0000 (0:00:00.783) 0:01:47.106 **** 2025-09-27 21:35:57.110589 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:35:57.110598 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:35:57.110608 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:35:57.110617 | orchestrator | 2025-09-27 21:35:57.110627 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-27 21:35:57.110637 | orchestrator | Saturday 27 September 2025 21:35:18 +0000 (0:00:00.663) 0:01:47.769 **** 2025-09-27 21:35:57.110647 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.110662 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.110672 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.110682 | orchestrator | 2025-09-27 21:35:57.110691 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-27 21:35:57.110701 | orchestrator | Saturday 27 September 2025 21:35:19 +0000 (0:00:01.102) 0:01:48.871 **** 2025-09-27 21:35:57.110711 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.110721 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.110730 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.110740 | orchestrator | 2025-09-27 21:35:57.110750 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-27 21:35:57.110759 | orchestrator | Saturday 27 September 2025 21:35:20 +0000 (0:00:00.752) 0:01:49.624 **** 2025-09-27 21:35:57.110769 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:35:57.110778 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:35:57.110788 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:35:57.110797 | orchestrator | 2025-09-27 21:35:57.110807 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-27 21:35:57.110817 | orchestrator | Saturday 27 September 2025 21:35:20 +0000 (0:00:00.303) 0:01:49.927 **** 2025-09-27 21:35:57.110827 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110853 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110874 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110884 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110908 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110924 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110935 | orchestrator | 2025-09-27 21:35:57.110944 | orchestrator | TASK [ovn-db : Copying 
over config.json files for services] ******************** 2025-09-27 21:35:57.110954 | orchestrator | Saturday 27 September 2025 21:35:22 +0000 (0:00:01.407) 0:01:51.334 **** 2025-09-27 21:35:57.110964 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110980 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110989 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.110999 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 
21:35:57.111053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111063 | orchestrator | 2025-09-27 21:35:57.111073 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-27 21:35:57.111083 | orchestrator | Saturday 27 September 2025 21:35:26 +0000 (0:00:04.377) 0:01:55.712 **** 2025-09-27 21:35:57.111098 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111115 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111145 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:35:57.111164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 21:35:57.111174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 21:35:57.111184 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-27 21:35:57.111194 | orchestrator |
2025-09-27 21:35:57.111204 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 21:35:57.111214 | orchestrator | Saturday 27 September 2025 21:35:29 +0000 (0:00:03.365) 0:01:59.077 ****
2025-09-27 21:35:57.111223 | orchestrator |
2025-09-27 21:35:57.111237 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 21:35:57.111247 | orchestrator | Saturday 27 September 2025 21:35:29 +0000 (0:00:00.072) 0:01:59.150 ****
2025-09-27 21:35:57.111257 | orchestrator |
2025-09-27 21:35:57.111267 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-27 21:35:57.111282 | orchestrator | Saturday 27 September 2025 21:35:29 +0000 (0:00:00.070) 0:01:59.220 ****
2025-09-27 21:35:57.111291 | orchestrator |
2025-09-27 21:35:57.111301 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-27 21:35:57.111311 | orchestrator | Saturday 27 September 2025 21:35:29 +0000 (0:00:00.067) 0:01:59.288 ****
2025-09-27 21:35:57.111320 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:35:57.111330 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:35:57.111339 | orchestrator |
2025-09-27 21:35:57.111353 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-27 21:35:57.111363 | orchestrator | Saturday 27 September 2025 21:35:36 +0000 (0:00:06.430) 0:02:05.719 ****
2025-09-27 21:35:57.111373 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:35:57.111397 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:35:57.111407 | orchestrator |
2025-09-27 21:35:57.111416 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-27 21:35:57.111426 | orchestrator | Saturday 27 September 2025 21:35:42 +0000 (0:00:06.351) 0:02:12.071 ****
2025-09-27 21:35:57.111436 | orchestrator | changed: [testbed-node-1]
2025-09-27 21:35:57.111445 | orchestrator | changed: [testbed-node-2]
2025-09-27 21:35:57.111455 | orchestrator |
2025-09-27 21:35:57.111465 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-27 21:35:57.111474 | orchestrator | Saturday 27 September 2025 21:35:49 +0000 (0:00:06.549) 0:02:18.620 ****
2025-09-27 21:35:57.111484 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:35:57.111493 | orchestrator |
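The three handlers above restart the Raft members one after another, and "Wait for leader election" then gives the cluster time to settle (the log shows it skipped on testbed-node-0). A minimal sketch of such a wait outside Ansible, assuming the ovn_nb_db container name from this play and the conventional OVN control socket path (both assumptions, not taken from this log):

    # Illustrative only: poll the OVN_Northbound Raft status until a leader is reported.
    import subprocess
    import time

    def wait_for_nb_leader(timeout=120):
        cmd = ["docker", "exec", "ovn_nb_db", "ovs-appctl",
               "-t", "/var/run/ovn/ovnnb_db.ctl", "cluster/status", "OVN_Northbound"]
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = subprocess.run(cmd, capture_output=True, text=True).stdout
            # cluster/status prints "Role: leader" on the leader and "Leader: <sid>" on followers.
            if "Role: leader" in status or ("Leader:" in status and "unknown" not in status):
                return True
            time.sleep(3)
        return False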
2025-09-27 21:35:57.111503 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-27 21:35:57.111513 | orchestrator | Saturday 27 September 2025 21:35:49 +0000 (0:00:00.116) 0:02:18.737 ****
2025-09-27 21:35:57.111522 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:35:57.111532 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:35:57.111542 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:35:57.111551 | orchestrator |
2025-09-27 21:35:57.111561 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-27 21:35:57.111570 | orchestrator | Saturday 27 September 2025 21:35:50 +0000 (0:00:00.789) 0:02:19.526 ****
2025-09-27 21:35:57.111580 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:35:57.111590 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:35:57.111599 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:35:57.111609 | orchestrator |
2025-09-27 21:35:57.111618 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-27 21:35:57.111628 | orchestrator | Saturday 27 September 2025 21:35:50 +0000 (0:00:00.725) 0:02:20.252 ****
2025-09-27 21:35:57.111638 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:35:57.111647 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:35:57.111657 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:35:57.111666 | orchestrator |
2025-09-27 21:35:57.111676 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-27 21:35:57.111686 | orchestrator | Saturday 27 September 2025 21:35:51 +0000 (0:00:00.808) 0:02:21.061 ****
2025-09-27 21:35:57.111695 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:35:57.111705 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:35:57.111715 | orchestrator | changed: [testbed-node-0]
2025-09-27 21:35:57.111724 | orchestrator |
2025-09-27 21:35:57.111734 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-27 21:35:57.111744 | orchestrator | Saturday 27 September 2025 21:35:52 +0000 (0:00:00.864) 0:02:21.925 ****
2025-09-27 21:35:57.111753 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:35:57.111763 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:35:57.111772 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:35:57.111782 | orchestrator |
2025-09-27 21:35:57.111792 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-27 21:35:57.111801 | orchestrator | Saturday 27 September 2025 21:35:53 +0000 (0:00:00.734) 0:02:22.660 ****
2025-09-27 21:35:57.111820 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:35:57.111830 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:35:57.111840 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:35:57.111849 | orchestrator |
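"Configure OVN NB connection settings" and its SB counterpart report changed only on testbed-node-0, the member currently answering as leader, while the other nodes skip. A hedged sketch of the kind of call this implies, using the conventional NB/SB ports 6641 and 6642 (the exact options kolla-ansible passes, for example inactivity probes, are not visible in this log):

    # Illustrative only: open the NB/SB OVSDB listeners on the cluster leader.
    import subprocess

    def configure_ovn_connections():
        subprocess.run(["docker", "exec", "ovn_nb_db", "ovn-nbctl",
                        "set-connection", "ptcp:6641:0.0.0.0"], check=True)
        subprocess.run(["docker", "exec", "ovn_sb_db", "ovn-sbctl",
                        "set-connection", "ptcp:6642:0.0.0.0"], check=True)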
2025-09-27 21:35:57.111859 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:35:57.111869 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-27 21:35:57.111879 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-27 21:35:57.111889 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-27 21:35:57.111899 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:35:57.111909 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:35:57.111918 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:35:57.111928 | orchestrator |
2025-09-27 21:35:57.111937 | orchestrator |
2025-09-27 21:35:57.111947 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:35:57.111957 | orchestrator | Saturday 27 September 2025 21:35:54 +0000 (0:00:00.912) 0:02:23.572 ****
2025-09-27 21:35:57.111966 | orchestrator | ===============================================================================
2025-09-27 21:35:57.111980 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.84s
2025-09-27 21:35:57.111990 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.35s
2025-09-27 21:35:57.111999 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.91s
2025-09-27 21:35:57.112009 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.44s
2025-09-27 21:35:57.112018 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.86s
2025-09-27 21:35:57.112028 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.52s
2025-09-27 21:35:57.112037 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.38s
2025-09-27 21:35:57.112052 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.39s
2025-09-27 21:35:57.112062 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.37s
2025-09-27 21:35:57.112071 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.68s
2025-09-27 21:35:57.112081 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.33s
2025-09-27 21:35:57.112090 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.92s
2025-09-27 21:35:57.112100 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s
2025-09-27 21:35:57.112110 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.64s
2025-09-27 21:35:57.112119 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.64s
2025-09-27 21:35:57.112128 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.61s
2025-09-27 21:35:57.112138 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s
2025-09-27 21:35:57.112147 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s
2025-09-27 21:35:57.112157 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.31s
2025-09-27 21:35:57.112166 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.24s
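The PLAY RECAP and TASKS RECAP above close the nested kolla-ansible OVN run: no host reports failed or unreachable tasks, and the slowest steps are the container restarts. As an illustration only (this helper is not part of the job), recap lines in this format could be checked mechanically like so:

    # Illustrative only: scan PLAY RECAP lines and flag failed or unreachable hosts.
    import re

    RECAP = re.compile(r"(?P<host>\S+)\s+:\s+ok=(\d+)\s+changed=(\d+)\s+"
                       r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)")

    def recap_is_clean(lines):
        for line in lines:
            match = RECAP.search(line)
            if match and (int(match["failed"]) or int(match["unreachable"])):
                return False
        return True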
2025-09-27 21:35:57.112176 | orchestrator | 2025-09-27 21:35:57 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED
2025-09-27 21:35:57.112193 | orchestrator | 2025-09-27 21:35:57 | INFO  | Wait 1 second(s) until the next check
2025-09-27 21:36:00.163105 | orchestrator | 2025-09-27 21:36:00 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state STARTED
2025-09-27 21:36:00.167749 | orchestrator | 2025-09-27 21:36:00 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED
2025-09-27 21:36:00.167900 | orchestrator | 2025-09-27 21:36:00 | INFO  | Wait 1 second(s) until the next check
[... the same pair of "is in state STARTED" / "Wait 1 second(s) until the next check" messages for tasks d43d7209-19c8-49b5-b56a-762f56e97248 and 27cf1923-62e1-40b9-8df6-6d4ad9702aad repeats roughly every 3 seconds from 21:36:03 through 21:38:38 ...]
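These INFO lines come from the deploy wrapper polling the two task IDs until they leave the STARTED state. A minimal sketch of that polling pattern, assuming a hypothetical get_task_state(task_id) helper (the real client behind these messages is not shown in this log):

    # Illustrative only: poll task states until every task reaches a terminal state.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)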
2025-09-27 21:38:41.607503 | orchestrator | 2025-09-27 21:38:41 | INFO  | Task d43d7209-19c8-49b5-b56a-762f56e97248 is in state SUCCESS
2025-09-27 21:38:41.608741 | orchestrator |
2025-09-27 21:38:41.608775 | orchestrator |
2025-09-27 21:38:41.608787 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 21:38:41.608799 | orchestrator |
2025-09-27 21:38:41.608811 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 21:38:41.608889 | orchestrator | Saturday 27 September 2025 21:32:31 +0000 (0:00:00.373) 0:00:00.373 ****
2025-09-27 21:38:41.608902 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:38:41.608913 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:38:41.608925 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:38:41.608936 | orchestrator |
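"Group hosts based on Kolla action" and the following "Group hosts based on enabled services" task sort the inventory into dynamic groups such as enable_loadbalancer_True, which the later plays target. The effect of that grouping, sketched in Python purely for illustration (the real mechanism is Ansible's group_by, and the names below are only examples):

    # Illustrative only: what grouping hosts by a boolean service flag amounts to.
    def group_hosts_by_flag(host_vars, flag="enable_loadbalancer"):
        groups = {}
        for host, variables in host_vars.items():
            group_name = f"{flag}_{variables.get(flag, False)}"
            groups.setdefault(group_name, []).append(host)
        return groups

    # group_hosts_by_flag({"testbed-node-0": {"enable_loadbalancer": True}})
    # -> {"enable_loadbalancer_True": ["testbed-node-0"]}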
2025-09-27 21:38:41.608947 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-27 21:38:41.608959 | orchestrator | Saturday 27 September 2025 21:32:31 +0000 (0:00:00.329) 0:00:00.703 ****
2025-09-27 21:38:41.608971 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-09-27 21:38:41.608982 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-09-27 21:38:41.608993 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-09-27 21:38:41.609004 | orchestrator |
2025-09-27 21:38:41.609015 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-09-27 21:38:41.609026 | orchestrator |
2025-09-27 21:38:41.609050 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-27 21:38:41.609062 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:00.571) 0:00:01.275 ****
2025-09-27 21:38:41.609074 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:38:41.609085 | orchestrator |
2025-09-27 21:38:41.609096 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-09-27 21:38:41.610278 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.802) 0:00:02.077 ****
2025-09-27 21:38:41.610351 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:38:41.610365 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:38:41.610377 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:38:41.610388 | orchestrator |
2025-09-27 21:38:41.610399 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-27 21:38:41.610410 | orchestrator | Saturday 27 September 2025 21:32:34 +0000 (0:00:00.977) 0:00:03.054 ****
2025-09-27 21:38:41.610421 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-27 21:38:41.610432 | orchestrator |
2025-09-27 21:38:41.610442 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-09-27 21:38:41.610453 | orchestrator | Saturday 27 September 2025 21:32:35 +0000 (0:00:01.042) 0:00:04.097 ****
2025-09-27 21:38:41.610464 | orchestrator | ok: [testbed-node-0]
2025-09-27 21:38:41.610474 | orchestrator | ok: [testbed-node-1]
2025-09-27 21:38:41.610485 | orchestrator | ok: [testbed-node-2]
2025-09-27 21:38:41.610496 | orchestrator |
2025-09-27 21:38:41.610507 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-09-27 21:38:41.610518 | orchestrator | Saturday 27 September 2025 21:32:35 +0000 (0:00:00.733) 0:00:04.831 ****
2025-09-27 21:38:41.610528 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-27 21:38:41.610540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-27 21:38:41.610551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-09-27 21:38:41.610561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-27 21:38:41.610572 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-27 21:38:41.610583 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-27 21:38:41.610593 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-27 21:38:41.610604 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-27 21:38:41.610615 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-09-27 21:38:41.610625 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-09-27 21:38:41.610636 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-27 21:38:41.610647 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-09-27 21:38:41.610657 | orchestrator |
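The sysctl task enables non-local binds for IPv4 and IPv6 (typically needed so haproxy can bind the VIP on nodes that do not currently hold it) and raises net.unix.max_dgram_qlen, while net.ipv4.tcp_retries2 carries the KOLLA_UNSET sentinel and is left untouched (it stays "ok"). A rough sketch of applying such a list via /proc/sys, with the sentinel handling as an assumption about what KOLLA_UNSET means here; the real role would also persist the values, which this sketch does not:

    # Illustrative only: apply sysctl entries, skipping the KOLLA_UNSET sentinel.
    import pathlib

    SYSCTLS = [
        {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
        {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
        {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},  # assumed: do not manage
        {"name": "net.unix.max_dgram_qlen", "value": 128},
    ]

    def apply_sysctls(entries):
        for entry in entries:
            if entry["value"] == "KOLLA_UNSET":
                continue
            path = pathlib.Path("/proc/sys") / entry["name"].replace(".", "/")
            path.write_text(f"{entry['value']}\n")  # runtime only, not persisted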
2025-09-27 21:38:41.610668 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-27 21:38:41.610679 | orchestrator | Saturday 27 September 2025 21:32:40 +0000 (0:00:04.289) 0:00:09.120 ****
2025-09-27 21:38:41.610690 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-27 21:38:41.610700 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-27 21:38:41.610711 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-27 21:38:41.610722 | orchestrator |
2025-09-27 21:38:41.610733 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-27 21:38:41.610743 | orchestrator | Saturday 27 September 2025 21:32:41 +0000 (0:00:00.946) 0:00:10.066 ****
2025-09-27 21:38:41.610754 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-27 21:38:41.610765 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-27 21:38:41.610777 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-27 21:38:41.610787 | orchestrator |
2025-09-27 21:38:41.610798 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-27 21:38:41.610809 | orchestrator | Saturday 27 September 2025 21:32:42 +0000 (0:00:01.323) 0:00:11.390 ****
2025-09-27 21:38:41.610820 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-09-27 21:38:41.610839 | orchestrator | skipping: [testbed-node-0]
2025-09-27 21:38:41.610867 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-09-27 21:38:41.610879 | orchestrator | skipping: [testbed-node-1]
2025-09-27 21:38:41.610889 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-09-27 21:38:41.610900 | orchestrator | skipping: [testbed-node-2]
2025-09-27 21:38:41.610911 | orchestrator |
2025-09-27 21:38:41.610921 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-09-27 21:38:41.610932 | orchestrator | Saturday 27 September 2025 21:32:43 +0000 (0:00:00.640) 0:00:12.030 ****
2025-09-27 21:38:41.610958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-27 21:38:41.610976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy',
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.610988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.611067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.611078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.611090 | orchestrator | 2025-09-27 21:38:41.611101 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-27 21:38:41.611112 | orchestrator | Saturday 27 September 2025 21:32:45 +0000 (0:00:01.936) 0:00:13.967 **** 2025-09-27 21:38:41.611123 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.611133 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.611144 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.611155 | orchestrator | 2025-09-27 21:38:41.611166 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-27 21:38:41.611196 | orchestrator | Saturday 27 September 2025 21:32:46 +0000 (0:00:01.147) 0:00:15.114 **** 2025-09-27 21:38:41.611207 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-27 21:38:41.611219 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-27 21:38:41.611229 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-27 21:38:41.611240 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-27 21:38:41.611251 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-27 21:38:41.611262 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-27 21:38:41.611273 | orchestrator | 2025-09-27 21:38:41.611284 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-27 21:38:41.611295 | orchestrator | Saturday 27 September 2025 21:32:48 +0000 (0:00:01.771) 0:00:16.886 **** 2025-09-27 21:38:41.611306 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.611316 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.611327 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.611338 | orchestrator | 2025-09-27 21:38:41.611348 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-27 21:38:41.611359 | orchestrator | Saturday 27 September 2025 21:32:49 +0000 (0:00:01.189) 0:00:18.076 **** 2025-09-27 21:38:41.611370 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.611388 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.611399 | orchestrator | ok: 
[testbed-node-2] 2025-09-27 21:38:41.611409 | orchestrator | 2025-09-27 21:38:41.611420 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-27 21:38:41.611431 | orchestrator | Saturday 27 September 2025 21:32:51 +0000 (0:00:01.802) 0:00:19.878 **** 2025-09-27 21:38:41.611443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.611461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.611474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.611490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 21:38:41.611502 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.611513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.611525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.611542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.611554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 21:38:41.611565 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.611584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.611601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.611612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.611624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 21:38:41.611640 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.611651 | orchestrator | 2025-09-27 21:38:41.611662 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-27 21:38:41.611673 | orchestrator | Saturday 27 September 2025 21:32:51 +0000 (0:00:00.686) 0:00:20.564 **** 2025-09-27 21:38:41.611684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611731 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.611754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 21:38:41.611773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.611796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 21:38:41.611813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.611841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552', '__omit_place_holder__5d9b03dc88689cd178e2480dafc6ea80d34e7552'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-27 21:38:41.611852 | orchestrator | 2025-09-27 21:38:41.611873 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-27 21:38:41.611885 | orchestrator | Saturday 27 September 2025 21:32:54 +0000 (0:00:03.073) 0:00:23.638 **** 2025-09-27 21:38:41.611896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.611993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.612005 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.612017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.612028 | orchestrator | 2025-09-27 21:38:41.612038 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-27 21:38:41.612049 | orchestrator | Saturday 27 September 2025 21:32:58 +0000 (0:00:04.026) 0:00:27.665 **** 2025-09-27 21:38:41.612061 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-27 21:38:41.612072 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-27 21:38:41.612082 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-27 21:38:41.612093 | orchestrator | 2025-09-27 21:38:41.612104 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-27 21:38:41.612115 | orchestrator | Saturday 27 September 2025 21:33:01 +0000 (0:00:02.704) 0:00:30.369 **** 2025-09-27 21:38:41.612126 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-27 21:38:41.612137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-27 21:38:41.612148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-27 21:38:41.612159 | orchestrator | 2025-09-27 21:38:41.612196 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-27 21:38:41.612208 | orchestrator | Saturday 27 September 2025 21:33:06 +0000 (0:00:05.168) 0:00:35.537 **** 2025-09-27 21:38:41.612220 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.612230 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.612241 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.612252 | orchestrator | 2025-09-27 21:38:41.612276 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-27 21:38:41.612297 | orchestrator | Saturday 27 September 2025 21:33:07 +0000 (0:00:00.604) 0:00:36.142 **** 2025-09-27 21:38:41.612308 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-27 21:38:41.612319 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-27 21:38:41.612343 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-27 21:38:41.612354 | orchestrator | 2025-09-27 21:38:41.612365 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-27 21:38:41.612376 | orchestrator | Saturday 27 September 2025 21:33:10 +0000 (0:00:02.932) 0:00:39.074 **** 2025-09-27 21:38:41.612387 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-27 21:38:41.612398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-27 21:38:41.612409 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-27 21:38:41.612420 | orchestrator | 2025-09-27 21:38:41.612430 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-27 21:38:41.612441 | orchestrator | Saturday 27 September 2025 21:33:13 +0000 (0:00:02.823) 0:00:41.898 **** 2025-09-27 21:38:41.612452 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-27 21:38:41.612463 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-27 21:38:41.612474 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-27 21:38:41.612485 | orchestrator | 2025-09-27 21:38:41.612496 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-27 21:38:41.612506 | orchestrator | Saturday 27 September 2025 21:33:14 +0000 (0:00:01.658) 0:00:43.557 **** 2025-09-27 21:38:41.612517 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-27 21:38:41.612529 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-27 21:38:41.612540 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-27 21:38:41.612550 | orchestrator | 2025-09-27 21:38:41.612561 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-27 21:38:41.612572 | orchestrator | Saturday 27 September 2025 21:33:16 +0000 (0:00:01.718) 0:00:45.275 **** 2025-09-27 21:38:41.612583 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.612594 | orchestrator | 2025-09-27 21:38:41.612605 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-27 21:38:41.612615 | orchestrator | Saturday 27 September 2025 21:33:17 +0000 (0:00:00.620) 0:00:45.896 **** 2025-09-27 21:38:41.612627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.612638 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.612655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.612679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.612691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.612702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.612713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.612724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.612736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.612747 | orchestrator | 2025-09-27 21:38:41.612758 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-27 21:38:41.612775 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:04.373) 0:00:50.270 **** 2025-09-27 21:38:41.612794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.612811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.612822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
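For readability, the haproxy item that the loadbalancer and service-cert-copy tasks iterate over (logged above as a Python dict for testbed-node-0) corresponds to the following YAML structure. This is only a re-rendering of the logged data, a sketch rather than the kolla-ansible source definition; healthcheck_curl and healthcheck_listen are helper scripts shipped inside the kolla images.

    # re-rendered from the logged item for testbed-node-0 (not taken from the role source)
    haproxy:
      container_name: haproxy
      group: loadbalancer
      enabled: true
      image: registry.osism.tech/kolla/haproxy:2024.2
      privileged: true
      volumes:
        - /etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - haproxy_socket:/var/lib/kolla/haproxy/
        - letsencrypt_certificates:/etc/haproxy/certificates
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
        timeout: "30"

The "backend internal TLS certificate" and "backend internal TLS key" copies that follow are skipped on all three nodes, presumably because backend TLS is not enabled in this testbed configuration; only the extra CA certificate copy reports changed.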
2025-09-27 21:38:41.612834 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.612845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.612857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.612868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.612879 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.612890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.612914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.612931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.612942 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.612954 | orchestrator | 2025-09-27 21:38:41.612965 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-27 21:38:41.612976 | orchestrator | Saturday 27 September 2025 21:33:22 +0000 (0:00:00.706) 0:00:50.976 **** 2025-09-27 21:38:41.612987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.612999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613021 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.613032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613083 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.613094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613128 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.613139 | orchestrator | 2025-09-27 21:38:41.613150 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-27 21:38:41.613161 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.929) 0:00:51.905 **** 2025-09-27 21:38:41.613194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613236 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.613252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613286 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.613297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613342 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.613353 | orchestrator | 2025-09-27 21:38:41.613364 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-27 21:38:41.613375 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.840) 0:00:52.745 **** 2025-09-27 21:38:41.613386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613421 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.613459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613502 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.613518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613558 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.613569 | orchestrator | 2025-09-27 21:38:41.613580 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-27 21:38:41.613591 | orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:00.511) 0:00:53.257 **** 2025-09-27 21:38:41.613602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613645 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.613662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613701 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.613712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613753 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.613764 | orchestrator | 2025-09-27 21:38:41.613775 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-27 21:38:41.613786 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.637) 0:00:53.894 **** 2025-09-27 21:38:41.613797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613842 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.613853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613894 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.613905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.613921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.613933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.613944 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.613954 | orchestrator | 2025-09-27 21:38:41.613965 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-27 21:38:41.613976 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.798) 0:00:54.693 **** 2025-09-27 21:38:41.613992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.614009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.614057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.614069 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.614080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.614092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.614111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.614123 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.614139 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.614157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.614169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.614209 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.614221 | orchestrator | 2025-09-27 21:38:41.614232 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-27 21:38:41.614243 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:00.725) 0:00:55.419 **** 2025-09-27 21:38:41.614254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.614266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.614277 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.614289 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.614306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.614329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.614341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.614352 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.614363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-27 21:38:41.614374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-27 21:38:41.614386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-27 21:38:41.614397 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.614408 | orchestrator | 2025-09-27 21:38:41.614419 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-27 21:38:41.614430 | orchestrator | Saturday 27 September 2025 21:33:27 +0000 (0:00:00.938) 0:00:56.357 **** 2025-09-27 21:38:41.614440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-27 21:38:41.614451 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-27 21:38:41.614468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-27 21:38:41.614479 | orchestrator | 2025-09-27 21:38:41.614490 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-27 21:38:41.614507 | orchestrator | Saturday 27 September 2025 21:33:29 +0000 (0:00:02.132) 0:00:58.489 **** 2025-09-27 21:38:41.614518 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-27 21:38:41.614529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-27 21:38:41.614540 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-27 21:38:41.614551 | orchestrator | 2025-09-27 21:38:41.614562 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-27 21:38:41.614573 | orchestrator | Saturday 27 September 2025 21:33:31 +0000 (0:00:01.744) 0:01:00.234 **** 2025-09-27 21:38:41.614584 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 21:38:41.614603 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 21:38:41.614614 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 21:38:41.614625 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 21:38:41.614636 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.614647 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 21:38:41.614658 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.614669 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 21:38:41.614680 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.614690 | orchestrator | 2025-09-27 21:38:41.614701 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-27 21:38:41.614712 | orchestrator | Saturday 27 September 2025 21:33:33 +0000 (0:00:01.759) 0:01:01.993 **** 2025-09-27 21:38:41.614724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.614735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.614747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-27 21:38:41.614764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.614781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.614797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-27 21:38:41.614809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.614820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.614832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-27 21:38:41.614843 | orchestrator | 2025-09-27 21:38:41.614854 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-27 21:38:41.614865 | orchestrator | Saturday 27 September 2025 21:33:36 +0000 (0:00:02.998) 0:01:04.992 **** 2025-09-27 21:38:41.614875 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.614886 | orchestrator | 2025-09-27 21:38:41.614897 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-27 21:38:41.614908 | orchestrator | Saturday 27 September 2025 21:33:37 +0000 (0:00:01.351) 0:01:06.343 **** 2025-09-27 21:38:41.614926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-27 21:38:41.614949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-27 21:38:41.614962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.614974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.614985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.614997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-27 21:38:41.615065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.615077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615099 | orchestrator | 2025-09-27 21:38:41.615116 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-27 21:38:41.615127 | orchestrator | Saturday 27 September 2025 21:33:41 +0000 (0:00:04.514) 0:01:10.857 **** 2025-09-27 21:38:41.615138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-27 21:38:41.615155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.615171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615210 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.615221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-27 21:38:41.615233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.615250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615272 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.615294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-27 21:38:41.615307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.615318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615346 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.615357 | orchestrator | 2025-09-27 21:38:41.615368 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-27 21:38:41.615379 | orchestrator | Saturday 27 September 2025 21:33:43 +0000 (0:00:01.219) 0:01:12.077 **** 2025-09-27 21:38:41.615390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-27 21:38:41.615402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-27 21:38:41.615414 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.615425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-27 21:38:41.615436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-27 
21:38:41.615447 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.615458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-27 21:38:41.615469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-27 21:38:41.615480 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.615491 | orchestrator | 2025-09-27 21:38:41.615507 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-27 21:38:41.615518 | orchestrator | Saturday 27 September 2025 21:33:44 +0000 (0:00:01.599) 0:01:13.677 **** 2025-09-27 21:38:41.615529 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.615540 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.615550 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.615561 | orchestrator | 2025-09-27 21:38:41.615572 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-27 21:38:41.615583 | orchestrator | Saturday 27 September 2025 21:33:46 +0000 (0:00:01.445) 0:01:15.123 **** 2025-09-27 21:38:41.615593 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.615604 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.615615 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.615626 | orchestrator | 2025-09-27 21:38:41.615636 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-27 21:38:41.615647 | orchestrator | Saturday 27 September 2025 21:33:49 +0000 (0:00:03.361) 0:01:18.484 **** 2025-09-27 21:38:41.615658 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.615669 | orchestrator | 2025-09-27 21:38:41.615684 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-27 21:38:41.615695 | orchestrator | Saturday 27 September 2025 21:33:50 +0000 (0:00:01.082) 0:01:19.567 **** 2025-09-27 21:38:41.615708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.615726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.615767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.615817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615840 | orchestrator | 2025-09-27 21:38:41.615851 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-27 21:38:41.615862 | orchestrator | Saturday 27 September 2025 21:33:54 +0000 (0:00:03.763) 0:01:23.330 **** 2025-09-27 21:38:41.615879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.615896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615924 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.615936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.615948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.615970 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.615987 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.616004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.616022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.616034 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616045 | orchestrator | 2025-09-27 21:38:41.616055 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-27 21:38:41.616066 | orchestrator | Saturday 27 September 2025 21:33:55 +0000 (0:00:00.601) 0:01:23.931 **** 2025-09-27 21:38:41.616077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-27 21:38:41.616089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-27 21:38:41.616101 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.616112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-27 21:38:41.616123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-27 21:38:41.616134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-27 21:38:41.616145 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.616156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-27 21:38:41.616167 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616226 | orchestrator | 2025-09-27 21:38:41.616238 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-27 21:38:41.616249 | orchestrator | Saturday 27 September 2025 21:33:55 +0000 (0:00:00.829) 0:01:24.761 **** 2025-09-27 21:38:41.616260 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.616271 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.616281 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.616292 | orchestrator | 2025-09-27 21:38:41.616303 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-27 21:38:41.616314 | orchestrator | Saturday 27 September 2025 21:33:57 +0000 (0:00:01.227) 0:01:25.988 **** 2025-09-27 21:38:41.616325 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.616336 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.616346 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.616357 | orchestrator | 2025-09-27 21:38:41.616381 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-27 21:38:41.616392 | orchestrator | Saturday 27 September 2025 21:33:58 +0000 (0:00:01.819) 0:01:27.808 **** 2025-09-27 21:38:41.616401 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.616411 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.616420 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616430 | orchestrator | 2025-09-27 21:38:41.616439 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-27 21:38:41.616449 | orchestrator | Saturday 27 September 2025 21:33:59 +0000 (0:00:00.272) 0:01:28.081 **** 2025-09-27 21:38:41.616459 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.616468 | orchestrator | 2025-09-27 21:38:41.616477 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-27 21:38:41.616487 | orchestrator | Saturday 27 September 2025 21:33:59 +0000 (0:00:00.707) 0:01:28.788 **** 2025-09-27 21:38:41.616501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-27 21:38:41.616513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-27 21:38:41.616524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-27 21:38:41.616534 | orchestrator | 2025-09-27 21:38:41.616543 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-27 21:38:41.616553 | orchestrator | Saturday 27 September 2025 21:34:02 +0000 (0:00:02.220) 0:01:31.009 **** 2025-09-27 21:38:41.616568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-27 21:38:41.616584 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.616598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 
fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-27 21:38:41.616608 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.616618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-27 21:38:41.616628 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616638 | orchestrator | 2025-09-27 21:38:41.616648 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-27 21:38:41.616657 | orchestrator | Saturday 27 September 2025 21:34:03 +0000 (0:00:01.221) 0:01:32.230 **** 2025-09-27 21:38:41.616669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-27 21:38:41.616679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-27 21:38:41.616690 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.616700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-27 21:38:41.616716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 
2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-27 21:38:41.616726 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.616741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-27 21:38:41.616752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-27 21:38:41.616761 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616771 | orchestrator | 2025-09-27 21:38:41.616781 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-27 21:38:41.616790 | orchestrator | Saturday 27 September 2025 21:34:04 +0000 (0:00:01.630) 0:01:33.861 **** 2025-09-27 21:38:41.616800 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.616810 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.616819 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616829 | orchestrator | 2025-09-27 21:38:41.616838 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-27 21:38:41.616848 | orchestrator | Saturday 27 September 2025 21:34:05 +0000 (0:00:00.721) 0:01:34.582 **** 2025-09-27 21:38:41.616858 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.616867 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.616877 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.616886 | orchestrator | 2025-09-27 21:38:41.616896 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-27 21:38:41.616905 | orchestrator | Saturday 27 September 2025 21:34:06 +0000 (0:00:01.188) 0:01:35.771 **** 2025-09-27 21:38:41.616915 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.616925 | orchestrator | 2025-09-27 21:38:41.616934 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-27 21:38:41.616944 | orchestrator | Saturday 27 September 2025 21:34:07 +0000 (0:00:00.721) 0:01:36.492 **** 2025-09-27 21:38:41.616954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.616964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.616993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.617036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.617046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617132 | orchestrator | 2025-09-27 21:38:41.617142 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-27 21:38:41.617152 | orchestrator | Saturday 27 September 2025 21:34:10 +0000 (0:00:03.368) 0:01:39.861 **** 2025-09-27 21:38:41.617162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.617208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617265 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.617276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.617292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617329 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.617343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.617354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617431 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.617441 | orchestrator | 2025-09-27 21:38:41.617451 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-27 21:38:41.617461 | orchestrator | Saturday 27 September 2025 21:34:11 +0000 (0:00:00.953) 0:01:40.815 **** 2025-09-27 21:38:41.617471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-27 21:38:41.617486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-27 21:38:41.617497 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.617507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-27 21:38:41.617517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-27 21:38:41.617527 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.617541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-27 21:38:41.617553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-27 21:38:41.617569 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.617586 | orchestrator | 2025-09-27 21:38:41.617602 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-27 21:38:41.617628 | orchestrator | Saturday 27 September 2025 21:34:12 +0000 (0:00:01.034) 0:01:41.849 **** 2025-09-27 21:38:41.617643 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.617653 | orchestrator | changed: 
[testbed-node-1] 2025-09-27 21:38:41.617663 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.617673 | orchestrator | 2025-09-27 21:38:41.617682 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-27 21:38:41.617692 | orchestrator | Saturday 27 September 2025 21:34:14 +0000 (0:00:01.678) 0:01:43.528 **** 2025-09-27 21:38:41.617702 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.617711 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.617721 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.617731 | orchestrator | 2025-09-27 21:38:41.617739 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-27 21:38:41.617747 | orchestrator | Saturday 27 September 2025 21:34:17 +0000 (0:00:02.382) 0:01:45.911 **** 2025-09-27 21:38:41.617755 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.617763 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.617771 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.617779 | orchestrator | 2025-09-27 21:38:41.617787 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-27 21:38:41.617795 | orchestrator | Saturday 27 September 2025 21:34:17 +0000 (0:00:00.503) 0:01:46.415 **** 2025-09-27 21:38:41.617803 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.617811 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.617819 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.617826 | orchestrator | 2025-09-27 21:38:41.617834 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-27 21:38:41.617842 | orchestrator | Saturday 27 September 2025 21:34:17 +0000 (0:00:00.307) 0:01:46.722 **** 2025-09-27 21:38:41.617850 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.617858 | orchestrator | 2025-09-27 21:38:41.617866 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-27 21:38:41.617874 | orchestrator | Saturday 27 September 2025 21:34:18 +0000 (0:00:00.753) 0:01:47.476 **** 2025-09-27 21:38:41.617882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:38:41.617896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:38:41.617909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:38:41.617923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:38:41.617948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:38:41.617973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.617993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:38:41.618010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618133 | orchestrator | 2025-09-27 21:38:41.618141 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-27 21:38:41.618154 | orchestrator | Saturday 27 September 2025 21:34:22 +0000 (0:00:03.593) 0:01:51.070 **** 2025-09-27 21:38:41.618171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:38:41.618196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:38:41.618205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:38:41.618264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618272 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.618281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:38:41.618289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618339 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.618351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:38:41.618359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:38:41.618367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.618421 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.618429 | orchestrator | 2025-09-27 21:38:41.618437 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-27 21:38:41.618446 | orchestrator | Saturday 27 September 2025 21:34:22 +0000 (0:00:00.792) 0:01:51.862 **** 2025-09-27 21:38:41.618454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-27 21:38:41.618462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-27 21:38:41.618471 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.618479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-27 21:38:41.618487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-27 21:38:41.618495 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.618503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-27 21:38:41.618511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-27 21:38:41.618519 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.618527 | orchestrator | 2025-09-27 21:38:41.618535 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-27 21:38:41.618543 | orchestrator | Saturday 27 September 2025 21:34:23 +0000 (0:00:00.996) 0:01:52.859 **** 2025-09-27 21:38:41.618555 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.618563 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.618571 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.618579 | orchestrator | 2025-09-27 21:38:41.618587 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-27 21:38:41.618595 | orchestrator | Saturday 27 September 2025 21:34:25 +0000 (0:00:01.333) 0:01:54.192 **** 2025-09-27 21:38:41.618602 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.618611 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.618619 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.618626 | orchestrator | 2025-09-27 21:38:41.618635 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-27 21:38:41.618643 | orchestrator | Saturday 27 September 2025 21:34:27 +0000 (0:00:02.101) 0:01:56.294 **** 2025-09-27 21:38:41.618650 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.618658 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.618666 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.618674 | orchestrator | 2025-09-27 21:38:41.618682 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-27 21:38:41.618690 | orchestrator | Saturday 27 September 2025 21:34:27 +0000 (0:00:00.485) 0:01:56.780 **** 2025-09-27 21:38:41.618698 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.618706 | orchestrator | 2025-09-27 21:38:41.618713 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-27 21:38:41.618721 | orchestrator | Saturday 27 September 2025 21:34:28 +0000 (0:00:00.778) 0:01:57.558 **** 2025-09-27 21:38:41.618741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:38:41.618752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.618774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:38:41.618785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.618802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:38:41.618815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.618828 | orchestrator | 
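The glance entries above show the inputs the haproxy-config role consumes: each service's 'haproxy' dict carries the mode, the port, an optional external_fqdn, extra frontend/backend options, and a custom_member_list of backend servers. As a rough, illustrative sketch only (this is not the kolla-ansible template; the dict values are copied from the log above, while the helper name render_backend and the "_back" suffix are assumptions made here), the backend stanza derived from the glance_api values would come out roughly like the output of:

# Illustrative sketch: derive a minimal HAProxy backend stanza from the
# glance_api values seen in the log above. Values are copied from the log;
# render_backend and the naming convention are hypothetical, for reading aid only.
glance_api = {
    "enabled": True,
    "mode": "http",
    "port": "9292",
    "backend_http_extra": ["timeout server 6h"],
    "custom_member_list": [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",  # the trailing empty member from the log is skipped below
    ],
}

def render_backend(name, svc):
    """Return a minimal 'backend' stanza; empty member entries are dropped."""
    lines = [f"backend {name}_back", f"    mode {svc['mode']}"]
    lines += [f"    {extra}" for extra in svc.get("backend_http_extra", [])]
    lines += [f"    {member}" for member in svc.get("custom_member_list", []) if member]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_backend("glance_api", glance_api))

Running the sketch prints a backend with mode http, the 6h server timeout, and the three testbed-node members, which is the shape of configuration the "Copying over glance haproxy config" task above is templating onto each controller.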
2025-09-27 21:38:41.618836 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-27 21:38:41.618844 | orchestrator | Saturday 27 September 2025 21:34:33 +0000 (0:00:04.342) 0:02:01.901 **** 2025-09-27 21:38:41.618857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:38:41.618870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.618883 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.618892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:38:41.618913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:38:41.618923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.618945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.618955 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.618963 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.618971 | orchestrator | 2025-09-27 21:38:41.618979 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-27 21:38:41.618987 | orchestrator | Saturday 27 September 2025 21:34:36 +0000 (0:00:03.014) 0:02:04.915 **** 2025-09-27 21:38:41.618995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 21:38:41.619008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 21:38:41.619017 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 21:38:41.619034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 21:38:41.619042 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.619050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 21:38:41.619063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-27 21:38:41.619072 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.619080 | orchestrator | 2025-09-27 21:38:41.619088 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-27 21:38:41.619096 | orchestrator | Saturday 27 September 2025 21:34:39 +0000 (0:00:03.056) 0:02:07.971 **** 2025-09-27 21:38:41.619104 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.619112 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.619120 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.619127 | orchestrator | 2025-09-27 21:38:41.619135 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-27 21:38:41.619143 | orchestrator | Saturday 27 September 2025 21:34:40 +0000 (0:00:01.268) 0:02:09.240 **** 2025-09-27 21:38:41.619155 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.619167 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.619188 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.619196 | orchestrator | 2025-09-27 21:38:41.619204 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-27 21:38:41.619212 | orchestrator | Saturday 27 September 2025 21:34:42 +0000 (0:00:02.093) 0:02:11.334 **** 2025-09-27 21:38:41.619220 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619228 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.619236 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.619244 | orchestrator | 2025-09-27 21:38:41.619252 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-27 21:38:41.619260 | orchestrator | Saturday 27 September 2025 21:34:42 +0000 (0:00:00.526) 0:02:11.860 **** 2025-09-27 21:38:41.619267 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.619275 | orchestrator | 2025-09-27 21:38:41.619283 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-27 21:38:41.619291 | orchestrator | Saturday 27 September 2025 21:34:43 +0000 (0:00:00.847) 0:02:12.707 **** 2025-09-27 21:38:41.619299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:38:41.619309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:38:41.619317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:38:41.619325 | orchestrator | 2025-09-27 21:38:41.619333 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-27 21:38:41.619341 | orchestrator | Saturday 27 September 2025 21:34:47 +0000 (0:00:03.227) 0:02:15.934 **** 2025-09-27 21:38:41.619355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:38:41.619374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}})  2025-09-27 21:38:41.619383 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619391 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.619399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:38:41.619407 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.619416 | orchestrator | 2025-09-27 21:38:41.619424 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-27 21:38:41.619432 | orchestrator | Saturday 27 September 2025 21:34:47 +0000 (0:00:00.572) 0:02:16.507 **** 2025-09-27 21:38:41.619440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-27 21:38:41.619448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-27 21:38:41.619456 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-27 21:38:41.619473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-27 21:38:41.619481 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.619488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-27 21:38:41.619497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-27 21:38:41.619504 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.619512 | orchestrator | 2025-09-27 21:38:41.619520 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-27 21:38:41.619529 | orchestrator | Saturday 27 September 2025 21:34:48 +0000 (0:00:00.624) 0:02:17.131 **** 2025-09-27 21:38:41.619537 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.619545 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.619553 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.619560 | orchestrator | 2025-09-27 21:38:41.619568 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-27 21:38:41.619581 | 
orchestrator | Saturday 27 September 2025 21:34:49 +0000 (0:00:01.267) 0:02:18.399 **** 2025-09-27 21:38:41.619589 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.619597 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.619605 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.619613 | orchestrator | 2025-09-27 21:38:41.619621 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-27 21:38:41.619629 | orchestrator | Saturday 27 September 2025 21:34:51 +0000 (0:00:02.056) 0:02:20.456 **** 2025-09-27 21:38:41.619637 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619645 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.619658 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.619666 | orchestrator | 2025-09-27 21:38:41.619674 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-27 21:38:41.619682 | orchestrator | Saturday 27 September 2025 21:34:52 +0000 (0:00:00.489) 0:02:20.946 **** 2025-09-27 21:38:41.619690 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.619698 | orchestrator | 2025-09-27 21:38:41.619706 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-27 21:38:41.619714 | orchestrator | Saturday 27 September 2025 21:34:52 +0000 (0:00:00.862) 0:02:21.808 **** 2025-09-27 21:38:41.619727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:38:41.619742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:38:41.619768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:38:41.619777 | orchestrator | 2025-09-27 21:38:41.619785 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-27 21:38:41.619794 | orchestrator | Saturday 27 September 2025 21:34:57 +0000 (0:00:04.299) 0:02:26.108 **** 2025-09-27 21:38:41.619816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:38:41.619826 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619835 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:38:41.619848 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.619869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:38:41.619879 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.619887 | orchestrator | 2025-09-27 21:38:41.619895 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-27 21:38:41.619903 | orchestrator | Saturday 27 September 2025 21:34:58 +0000 (0:00:01.124) 0:02:27.233 **** 2025-09-27 21:38:41.619911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-27 21:38:41.619920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-27 21:38:41.619929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-27 21:38:41.619943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-27 21:38:41.619951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-27 21:38:41.619959 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.619968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-27 21:38:41.619976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-27 21:38:41.619985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-27 21:38:41.620149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-27 21:38:41.620166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-27 21:38:41.620215 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.620232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-27 21:38:41.620241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-27 21:38:41.620249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-27 21:38:41.620258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-27 21:38:41.620266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-27 21:38:41.620274 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.620283 | orchestrator | 2025-09-27 21:38:41.620291 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-27 21:38:41.620306 | orchestrator | Saturday 27 September 2025 21:34:59 +0000 (0:00:01.054) 0:02:28.287 **** 2025-09-27 21:38:41.620315 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.620322 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.620330 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.620338 | orchestrator | 2025-09-27 21:38:41.620346 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] 
************ 2025-09-27 21:38:41.620354 | orchestrator | Saturday 27 September 2025 21:35:00 +0000 (0:00:01.459) 0:02:29.746 **** 2025-09-27 21:38:41.620362 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.620370 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.620378 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.620385 | orchestrator | 2025-09-27 21:38:41.620394 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-27 21:38:41.620402 | orchestrator | Saturday 27 September 2025 21:35:02 +0000 (0:00:02.125) 0:02:31.871 **** 2025-09-27 21:38:41.620410 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.620418 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.620426 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.620434 | orchestrator | 2025-09-27 21:38:41.620442 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-27 21:38:41.620450 | orchestrator | Saturday 27 September 2025 21:35:03 +0000 (0:00:00.305) 0:02:32.177 **** 2025-09-27 21:38:41.620458 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.620466 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.620473 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.620481 | orchestrator | 2025-09-27 21:38:41.620489 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-27 21:38:41.620497 | orchestrator | Saturday 27 September 2025 21:35:03 +0000 (0:00:00.508) 0:02:32.686 **** 2025-09-27 21:38:41.620505 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.620513 | orchestrator | 2025-09-27 21:38:41.620520 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-27 21:38:41.620528 | orchestrator | Saturday 27 September 2025 21:35:04 +0000 (0:00:00.991) 0:02:33.677 **** 2025-09-27 21:38:41.620545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:38:41.620560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:38:41.620575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:38:41.620585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:38:41.620593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:38:41.620602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:38:41.620619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:38:41.620629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:38:41.620641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:38:41.620650 | orchestrator | 2025-09-27 21:38:41.620658 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-27 21:38:41.620666 | orchestrator | Saturday 27 September 2025 21:35:07 +0000 (0:00:03.090) 0:02:36.768 **** 2025-09-27 21:38:41.620675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:38:41.620684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:38:41.620698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:38:41.620706 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.620718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:38:41.620732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:38:41.620743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:38:41.620751 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.620759 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:38:41.620771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:38:41.620782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:38:41.620793 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.620801 | orchestrator | 2025-09-27 21:38:41.620808 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-27 21:38:41.620815 | orchestrator | Saturday 27 September 2025 21:35:08 +0000 (0:00:00.707) 0:02:37.475 **** 2025-09-27 21:38:41.620823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 21:38:41.620832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 21:38:41.620839 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.620847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})  2025-09-27 21:38:41.620856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 21:38:41.620864 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.620871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 21:38:41.620878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-27 21:38:41.620885 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.620891 | orchestrator | 2025-09-27 21:38:41.620898 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-27 21:38:41.620906 | orchestrator | Saturday 27 September 2025 21:35:09 +0000 (0:00:00.739) 0:02:38.215 **** 2025-09-27 21:38:41.620912 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.620919 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.620926 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.620933 | orchestrator | 2025-09-27 21:38:41.620939 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-27 21:38:41.620946 | orchestrator | Saturday 27 September 2025 21:35:10 +0000 (0:00:01.407) 0:02:39.623 **** 2025-09-27 21:38:41.620952 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.620959 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.620966 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.620973 | orchestrator | 2025-09-27 21:38:41.620979 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-27 21:38:41.620986 | orchestrator | Saturday 27 September 2025 21:35:12 +0000 (0:00:02.211) 0:02:41.834 **** 2025-09-27 21:38:41.620993 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.621000 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.621006 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.621013 | orchestrator | 2025-09-27 21:38:41.621019 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-27 21:38:41.621030 | orchestrator | Saturday 27 September 2025 21:35:13 +0000 (0:00:00.509) 0:02:42.344 **** 2025-09-27 21:38:41.621037 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.621044 | orchestrator | 2025-09-27 21:38:41.621051 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-27 21:38:41.621058 | orchestrator | Saturday 27 September 2025 21:35:14 +0000 (0:00:00.960) 0:02:43.304 **** 2025-09-27 21:38:41.621072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:38:41.621081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:38:41.621096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:38:41.621119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621127 | orchestrator | 2025-09-27 21:38:41.621139 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-27 21:38:41.621146 | orchestrator | Saturday 27 September 2025 21:35:18 +0000 (0:00:03.648) 0:02:46.953 **** 2025-09-27 21:38:41.621154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:38:41.621161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:38:41.621192 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.621204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621211 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.621221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:38:41.621229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621236 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.621242 | orchestrator | 2025-09-27 21:38:41.621249 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-27 
21:38:41.621256 | orchestrator | Saturday 27 September 2025 21:35:19 +0000 (0:00:01.079) 0:02:48.033 **** 2025-09-27 21:38:41.621263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-27 21:38:41.621270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-27 21:38:41.621277 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.621284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-27 21:38:41.621291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-27 21:38:41.621302 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.621309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-27 21:38:41.621316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-27 21:38:41.621323 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.621330 | orchestrator | 2025-09-27 21:38:41.621336 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-27 21:38:41.621343 | orchestrator | Saturday 27 September 2025 21:35:20 +0000 (0:00:00.890) 0:02:48.923 **** 2025-09-27 21:38:41.621350 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.621357 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.621364 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.621370 | orchestrator | 2025-09-27 21:38:41.621377 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-27 21:38:41.621385 | orchestrator | Saturday 27 September 2025 21:35:21 +0000 (0:00:01.248) 0:02:50.172 **** 2025-09-27 21:38:41.621391 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.621398 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.621405 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.621428 | orchestrator | 2025-09-27 21:38:41.621435 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-27 21:38:41.621442 | orchestrator | Saturday 27 September 2025 21:35:23 +0000 (0:00:02.069) 0:02:52.241 **** 2025-09-27 21:38:41.621452 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.621460 | orchestrator | 2025-09-27 21:38:41.621467 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-27 21:38:41.621473 | orchestrator | Saturday 27 September 2025 21:35:24 +0000 (0:00:01.212) 0:02:53.454 **** 2025-09-27 21:38:41.621483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-27 21:38:41.621491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-27 21:38:41.621536 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-27 21:38:41.621581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 
'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621606 | orchestrator | 2025-09-27 21:38:41.621613 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-27 21:38:41.621620 | orchestrator | Saturday 27 September 2025 21:35:28 +0000 (0:00:03.995) 0:02:57.449 **** 2025-09-27 21:38:41.621629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-27 21:38:41.621644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621677 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.621684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-27 21:38:41.621695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621723 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.621730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-27 21:38:41.621737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.621763 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.621769 | orchestrator | 2025-09-27 21:38:41.621776 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-27 21:38:41.621783 | orchestrator | Saturday 27 September 2025 21:35:29 +0000 (0:00:00.746) 0:02:58.196 **** 2025-09-27 21:38:41.621790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-27 
21:38:41.621797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-27 21:38:41.621804 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.621813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-27 21:38:41.621820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-27 21:38:41.621831 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.621838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-27 21:38:41.621844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-27 21:38:41.621851 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.621858 | orchestrator | 2025-09-27 21:38:41.621864 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-27 21:38:41.621871 | orchestrator | Saturday 27 September 2025 21:35:30 +0000 (0:00:01.615) 0:02:59.812 **** 2025-09-27 21:38:41.621878 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.621884 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.621891 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.621897 | orchestrator | 2025-09-27 21:38:41.621904 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-27 21:38:41.621910 | orchestrator | Saturday 27 September 2025 21:35:32 +0000 (0:00:01.392) 0:03:01.204 **** 2025-09-27 21:38:41.621917 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.621924 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.621930 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.621937 | orchestrator | 2025-09-27 21:38:41.621944 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-27 21:38:41.621950 | orchestrator | Saturday 27 September 2025 21:35:34 +0000 (0:00:02.028) 0:03:03.233 **** 2025-09-27 21:38:41.621957 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.621964 | orchestrator | 2025-09-27 21:38:41.621970 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-27 21:38:41.621977 | orchestrator | Saturday 27 September 2025 21:35:35 +0000 (0:00:01.306) 0:03:04.540 **** 2025-09-27 21:38:41.621984 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:38:41.621990 | orchestrator | 2025-09-27 21:38:41.621997 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-27 21:38:41.622004 | orchestrator | Saturday 27 September 2025 21:35:38 +0000 (0:00:03.003) 0:03:07.543 **** 2025-09-27 21:38:41.622038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:38:41.622058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:38:41.622066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 21:38:41.622074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 21:38:41.622081 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622088 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:38:41.622133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 21:38:41.622140 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622147 | orchestrator | 2025-09-27 21:38:41.622154 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-27 21:38:41.622160 | orchestrator | Saturday 27 September 2025 21:35:40 +0000 (0:00:02.123) 0:03:09.667 **** 2025-09-27 21:38:41.622168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:38:41.622200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 21:38:41.622208 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:38:41.622226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 21:38:41.622233 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:38:41.622260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-27 21:38:41.622267 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622273 | orchestrator | 2025-09-27 21:38:41.622280 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-27 21:38:41.622287 | orchestrator | Saturday 27 September 2025 21:35:42 +0000 (0:00:02.166) 0:03:11.834 **** 2025-09-27 21:38:41.622294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 21:38:41.622302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 21:38:41.622308 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 21:38:41.622323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 21:38:41.622334 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 21:38:41.622355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-27 21:38:41.622362 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622369 | orchestrator | 2025-09-27 21:38:41.622376 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-27 21:38:41.622383 | orchestrator | Saturday 27 September 2025 21:35:45 +0000 (0:00:02.633) 0:03:14.467 **** 2025-09-27 21:38:41.622389 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.622396 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.622403 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.622409 | orchestrator | 2025-09-27 21:38:41.622416 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-27 21:38:41.622423 | orchestrator | Saturday 27 September 2025 21:35:47 +0000 (0:00:01.818) 0:03:16.285 **** 2025-09-27 21:38:41.622430 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622437 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622443 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622450 | orchestrator | 2025-09-27 21:38:41.622457 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-27 21:38:41.622463 | orchestrator | Saturday 27 September 2025 21:35:48 +0000 (0:00:01.340) 0:03:17.626 **** 2025-09-27 21:38:41.622470 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622477 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622484 | orchestrator | skipping: 
[testbed-node-2] 2025-09-27 21:38:41.622490 | orchestrator | 2025-09-27 21:38:41.622497 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-27 21:38:41.622512 | orchestrator | Saturday 27 September 2025 21:35:49 +0000 (0:00:00.287) 0:03:17.914 **** 2025-09-27 21:38:41.622519 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.622526 | orchestrator | 2025-09-27 21:38:41.622532 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-27 21:38:41.622539 | orchestrator | Saturday 27 September 2025 21:35:50 +0000 (0:00:01.316) 0:03:19.230 **** 2025-09-27 21:38:41.622546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-27 21:38:41.622558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-27 21:38:41.622570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-27 21:38:41.622578 | orchestrator | 2025-09-27 21:38:41.622585 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-27 21:38:41.622592 | orchestrator | Saturday 27 September 2025 21:35:51 +0000 (0:00:01.531) 0:03:20.761 **** 2025-09-27 21:38:41.622602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-27 21:38:41.622609 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-27 21:38:41.622623 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-27 21:38:41.622641 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622647 | orchestrator | 2025-09-27 21:38:41.622655 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-27 21:38:41.622661 | orchestrator | Saturday 27 September 2025 21:35:52 +0000 (0:00:00.375) 0:03:21.136 **** 2025-09-27 21:38:41.622668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-27 21:38:41.622676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-27 21:38:41.622683 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622689 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-27 21:38:41.622707 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622713 | orchestrator | 2025-09-27 21:38:41.622720 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-27 21:38:41.622727 | orchestrator | Saturday 27 September 2025 21:35:53 +0000 (0:00:00.806) 0:03:21.943 **** 2025-09-27 21:38:41.622733 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622740 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622747 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622753 | orchestrator | 2025-09-27 21:38:41.622760 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-27 21:38:41.622766 | orchestrator | Saturday 27 September 2025 21:35:53 +0000 (0:00:00.444) 0:03:22.388 **** 2025-09-27 21:38:41.622773 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622780 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622786 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622793 | orchestrator | 2025-09-27 21:38:41.622800 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-27 21:38:41.622807 | orchestrator | Saturday 27 September 2025 21:35:54 +0000 (0:00:01.290) 0:03:23.678 **** 2025-09-27 21:38:41.622816 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.622823 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.622829 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.622836 | orchestrator | 2025-09-27 21:38:41.622843 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-27 21:38:41.622849 | orchestrator | Saturday 27 September 2025 21:35:55 +0000 (0:00:00.307) 0:03:23.986 **** 2025-09-27 21:38:41.622856 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.622863 | orchestrator | 2025-09-27 21:38:41.622869 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-27 21:38:41.622876 | orchestrator | Saturday 27 September 2025 21:35:56 +0000 (0:00:01.341) 0:03:25.327 **** 2025-09-27 21:38:41.622887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:38:41.622895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 21:38:41.622931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:38:41.622944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.622970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.622989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.622997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 21:38:41.623011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.623127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:38:41.623134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.623253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 21:38:41.623260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.623362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623369 | orchestrator | 2025-09-27 21:38:41.623376 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-27 21:38:41.623383 | orchestrator | Saturday 27 September 2025 21:36:00 +0000 (0:00:04.148) 0:03:29.475 **** 2025-09-27 21:38:41.623390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:38:41.623397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 21:38:41.623439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:38:41.623461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 21:38:41 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:41.623536 | orchestrator | 2025-09-27 21:38:41 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:41.623543 | orchestrator | 2025-09-27 21:38:41 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:41.623550 | orchestrator | 2025-09-27 21:38:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:38:41.623683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image':
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:38:41.623691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.623779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-27 21:38:41.623795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623801 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.623808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623818 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.623908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-27 21:38:41.623931 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.623937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.623952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-27 21:38:41.623959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:38:41.623965 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.623971 | orchestrator | 2025-09-27 21:38:41.623977 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-27 21:38:41.623984 | orchestrator | Saturday 27 September 2025 21:36:02 +0000 (0:00:01.408) 0:03:30.884 **** 2025-09-27 21:38:41.623990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-27 21:38:41.623997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-27 21:38:41.624008 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.624014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-27 21:38:41.624020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-27 21:38:41.624027 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.624033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-27 21:38:41.624039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-27 21:38:41.624045 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.624051 | orchestrator | 2025-09-27 21:38:41.624058 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-27 21:38:41.624064 | orchestrator | Saturday 27 September 2025 21:36:03 +0000 (0:00:01.862) 0:03:32.746 **** 2025-09-27 21:38:41.624070 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.624076 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.624083 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.624089 | orchestrator | 2025-09-27 21:38:41.624095 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-27 21:38:41.624101 | orchestrator | Saturday 27 
September 2025 21:36:05 +0000 (0:00:01.314) 0:03:34.061 **** 2025-09-27 21:38:41.624107 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.624113 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.624120 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.624126 | orchestrator | 2025-09-27 21:38:41.624132 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-27 21:38:41.624138 | orchestrator | Saturday 27 September 2025 21:36:07 +0000 (0:00:02.028) 0:03:36.089 **** 2025-09-27 21:38:41.624144 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.624150 | orchestrator | 2025-09-27 21:38:41.624157 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-27 21:38:41.624163 | orchestrator | Saturday 27 September 2025 21:36:08 +0000 (0:00:01.157) 0:03:37.247 **** 2025-09-27 21:38:41.624187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.624195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.624208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.624214 | orchestrator | 2025-09-27 21:38:41.624220 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-27 21:38:41.624227 | orchestrator | Saturday 27 September 2025 21:36:11 +0000 (0:00:03.512) 0:03:40.759 **** 2025-09-27 21:38:41.624233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.624239 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.624249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.624256 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.624265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.624276 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.624283 | 
orchestrator | 2025-09-27 21:38:41.624289 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-27 21:38:41.624296 | orchestrator | Saturday 27 September 2025 21:36:12 +0000 (0:00:00.529) 0:03:41.288 **** 2025-09-27 21:38:41.624303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624318 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.624325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624338 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.624346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624360 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.624367 | orchestrator | 2025-09-27 21:38:41.624374 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-27 21:38:41.624381 | orchestrator | Saturday 27 September 2025 21:36:13 +0000 (0:00:00.736) 0:03:42.025 **** 2025-09-27 21:38:41.624387 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.624394 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.624401 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.624407 | orchestrator | 2025-09-27 21:38:41.624414 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-27 21:38:41.624421 | orchestrator | Saturday 27 September 2025 21:36:14 +0000 (0:00:01.310) 0:03:43.336 **** 2025-09-27 21:38:41.624428 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.624435 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.624441 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.624448 | orchestrator | 2025-09-27 21:38:41.624455 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-27 21:38:41.624462 | orchestrator | Saturday 27 September 2025 21:36:16 +0000 (0:00:02.135) 0:03:45.471 **** 2025-09-27 21:38:41.624468 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.624475 | orchestrator | 2025-09-27 21:38:41.624482 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] 
*********************** 2025-09-27 21:38:41.624489 | orchestrator | Saturday 27 September 2025 21:36:18 +0000 (0:00:01.433) 0:03:46.905 **** 2025-09-27 21:38:41.624503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.624516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.624524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.624573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624580 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624587 | orchestrator | 2025-09-27 21:38:41.624593 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-27 21:38:41.624600 | orchestrator | Saturday 27 September 2025 21:36:22 +0000 (0:00:03.977) 0:03:50.883 **** 2025-09-27 21:38:41.624611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.624625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624639 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.624646 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.624653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624669 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.624682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.624689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.624702 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.624708 | orchestrator | 2025-09-27 21:38:41.624714 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-27 21:38:41.624720 | orchestrator | Saturday 27 September 2025 21:36:22 +0000 (0:00:00.905) 0:03:51.788 **** 2025-09-27 21:38:41.624727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624756 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.624763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624790 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.624797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-27 21:38:41.624827 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.624833 | orchestrator | 2025-09-27 21:38:41.624839 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-27 21:38:41.624845 | orchestrator | Saturday 27 September 2025 21:36:24 +0000 (0:00:01.212) 0:03:53.001 **** 2025-09-27 21:38:41.624851 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.624858 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.624864 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.624870 | orchestrator | 2025-09-27 21:38:41.624876 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-27 21:38:41.624882 | orchestrator | Saturday 27 September 2025 21:36:25 +0000 (0:00:01.386) 0:03:54.387 **** 2025-09-27 21:38:41.624888 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.624894 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.624900 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.624906 | orchestrator | 2025-09-27 21:38:41.624912 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-27 21:38:41.624919 | orchestrator | Saturday 27 September 2025 21:36:27 +0000 (0:00:02.043) 0:03:56.430 **** 2025-09-27 21:38:41.624925 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.624931 | orchestrator | 2025-09-27 21:38:41.624937 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-27 21:38:41.624943 | orchestrator | Saturday 27 September 2025 21:36:29 +0000 (0:00:01.467) 0:03:57.898 **** 2025-09-27 21:38:41.624949 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-27 21:38:41.624959 | orchestrator | 2025-09-27 21:38:41.624966 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-novncproxy haproxy config] *** 2025-09-27 21:38:41.624972 | orchestrator | Saturday 27 September 2025 21:36:29 +0000 (0:00:00.821) 0:03:58.720 **** 2025-09-27 21:38:41.624978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-27 21:38:41.624985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-27 21:38:41.624992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-27 21:38:41.624998 | orchestrator | 2025-09-27 21:38:41.625004 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-27 21:38:41.625011 | orchestrator | Saturday 27 September 2025 21:36:34 +0000 (0:00:04.398) 0:04:03.118 **** 2025-09-27 21:38:41.625020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625029 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625042 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625055 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625061 | orchestrator | 2025-09-27 21:38:41.625067 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-27 21:38:41.625077 | orchestrator | Saturday 27 September 2025 21:36:35 +0000 (0:00:01.364) 0:04:04.483 **** 2025-09-27 21:38:41.625083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 21:38:41.625090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 21:38:41.625096 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 21:38:41.625109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 21:38:41.625115 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 21:38:41.625128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-27 21:38:41.625134 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625140 | orchestrator | 2025-09-27 21:38:41.625146 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-27 21:38:41.625153 | orchestrator | Saturday 27 September 2025 21:36:37 +0000 (0:00:01.496) 0:04:05.979 **** 2025-09-27 21:38:41.625159 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.625165 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.625171 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.625188 | orchestrator | 2025-09-27 21:38:41.625194 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-27 21:38:41.625201 | orchestrator | Saturday 27 September 2025 21:36:39 +0000 (0:00:02.402) 0:04:08.381 **** 2025-09-27 21:38:41.625207 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.625213 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.625219 | orchestrator | changed: 
[testbed-node-2] 2025-09-27 21:38:41.625225 | orchestrator | 2025-09-27 21:38:41.625231 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-27 21:38:41.625237 | orchestrator | Saturday 27 September 2025 21:36:42 +0000 (0:00:02.969) 0:04:11.351 **** 2025-09-27 21:38:41.625246 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-27 21:38:41.625252 | orchestrator | 2025-09-27 21:38:41.625259 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-27 21:38:41.625265 | orchestrator | Saturday 27 September 2025 21:36:43 +0000 (0:00:01.366) 0:04:12.717 **** 2025-09-27 21:38:41.625274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625284 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625297 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625310 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625316 | orchestrator | 2025-09-27 21:38:41.625322 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-27 21:38:41.625329 | orchestrator | Saturday 27 September 2025 21:36:45 +0000 (0:00:01.254) 0:04:13.972 **** 2025-09-27 21:38:41.625335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625341 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625354 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-27 21:38:41.625367 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625373 | orchestrator | 2025-09-27 21:38:41.625379 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-27 21:38:41.625385 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:01.297) 0:04:15.269 **** 2025-09-27 21:38:41.625391 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625397 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625403 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625410 | orchestrator | 2025-09-27 21:38:41.625419 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-27 21:38:41.625429 | orchestrator | Saturday 27 September 2025 21:36:48 +0000 (0:00:01.732) 0:04:17.002 **** 2025-09-27 21:38:41.625440 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.625451 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.625461 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.625472 | orchestrator | 2025-09-27 21:38:41.625483 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-27 21:38:41.625498 | orchestrator | Saturday 27 September 2025 21:36:50 +0000 (0:00:02.353) 0:04:19.355 **** 2025-09-27 21:38:41.625509 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.625515 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.625521 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.625527 | orchestrator | 2025-09-27 21:38:41.625534 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-27 21:38:41.625540 | orchestrator | Saturday 27 September 2025 21:36:53 +0000 (0:00:02.868) 0:04:22.223 **** 2025-09-27 21:38:41.625546 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-27 21:38:41.625552 | orchestrator | 2025-09-27 21:38:41.625558 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-27 21:38:41.625565 | orchestrator | 
Saturday 27 September 2025 21:36:54 +0000 (0:00:00.834) 0:04:23.058 **** 2025-09-27 21:38:41.625571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 21:38:41.625578 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 21:38:41.625591 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 21:38:41.625603 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625609 | orchestrator | 2025-09-27 21:38:41.625616 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-27 21:38:41.625622 | orchestrator | Saturday 27 September 2025 21:36:55 +0000 (0:00:01.300) 0:04:24.358 **** 2025-09-27 21:38:41.625628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 21:38:41.625639 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 21:38:41.625652 | orchestrator | 
skipping: [testbed-node-1] 2025-09-27 21:38:41.625755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-27 21:38:41.625766 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625773 | orchestrator | 2025-09-27 21:38:41.625779 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-27 21:38:41.625785 | orchestrator | Saturday 27 September 2025 21:36:56 +0000 (0:00:01.339) 0:04:25.698 **** 2025-09-27 21:38:41.625791 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.625798 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.625804 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.625810 | orchestrator | 2025-09-27 21:38:41.625816 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-27 21:38:41.625822 | orchestrator | Saturday 27 September 2025 21:36:58 +0000 (0:00:01.490) 0:04:27.188 **** 2025-09-27 21:38:41.625828 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.625834 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.625841 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.625847 | orchestrator | 2025-09-27 21:38:41.625853 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-27 21:38:41.625859 | orchestrator | Saturday 27 September 2025 21:37:00 +0000 (0:00:02.419) 0:04:29.608 **** 2025-09-27 21:38:41.625865 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.625872 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.625878 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.625884 | orchestrator | 2025-09-27 21:38:41.625890 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-27 21:38:41.625896 | orchestrator | Saturday 27 September 2025 21:37:03 +0000 (0:00:03.243) 0:04:32.851 **** 2025-09-27 21:38:41.625902 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.625909 | orchestrator | 2025-09-27 21:38:41.625915 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-27 21:38:41.625921 | orchestrator | Saturday 27 September 2025 21:37:05 +0000 (0:00:01.559) 0:04:34.411 **** 2025-09-27 21:38:41.625928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.625939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.625960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 21:38:41.625971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 21:38:41.625978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.625984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.625991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.626049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.626061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.626068 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 21:38:41.626074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.626098 | orchestrator | 2025-09-27 21:38:41.626104 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-27 21:38:41.626111 | orchestrator | Saturday 27 September 2025 21:37:08 +0000 (0:00:03.385) 0:04:37.797 **** 2025-09-27 21:38:41.626132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.626139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 21:38:41.626146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.626198 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.626205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.626226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 21:38:41.626237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.626261 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.626267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}})  2025-09-27 21:38:41.626274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-27 21:38:41.626280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-27 21:38:41.626311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:38:41.626318 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.626324 | orchestrator | 2025-09-27 21:38:41.626330 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-27 21:38:41.626336 | orchestrator | Saturday 27 September 2025 21:37:09 +0000 (0:00:00.687) 0:04:38.484 **** 2025-09-27 21:38:41.626343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 21:38:41.626353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 21:38:41.626361 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.626368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 21:38:41.626374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 21:38:41.626382 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.626389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 21:38:41.626395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-27 21:38:41.626402 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.626409 | orchestrator | 2025-09-27 21:38:41.626416 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-27 21:38:41.626423 | orchestrator | Saturday 27 September 2025 21:37:11 +0000 (0:00:01.399) 0:04:39.884 **** 2025-09-27 21:38:41.626430 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.626437 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.626444 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.626450 | orchestrator | 2025-09-27 21:38:41.626456 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-27 21:38:41.626462 | orchestrator | Saturday 27 September 2025 21:37:12 +0000 (0:00:01.451) 0:04:41.336 **** 2025-09-27 21:38:41.626468 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.626474 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.626480 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.626486 | orchestrator | 2025-09-27 21:38:41.626493 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-27 21:38:41.626499 | orchestrator | Saturday 27 September 2025 21:37:14 +0000 (0:00:02.197) 0:04:43.533 **** 2025-09-27 21:38:41.626505 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.626511 | orchestrator | 2025-09-27 21:38:41.626517 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-27 21:38:41.626523 | orchestrator | Saturday 27 September 2025 21:37:16 +0000 (0:00:01.346) 0:04:44.880 **** 2025-09-27 21:38:41.626543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-09-27 21:38:41.626554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:38:41.626565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:38:41.626572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:38:41.626594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:38:41.626605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:38:41.626616 | orchestrator | 2025-09-27 21:38:41.626623 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-27 21:38:41.626629 | orchestrator | Saturday 27 September 2025 21:37:21 +0000 (0:00:05.315) 0:04:50.196 **** 2025-09-27 21:38:41.626636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:38:41.626643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:38:41.626650 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.626656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:38:41.626680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:38:41.626692 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.626699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:38:41.626705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:38:41.626712 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.626718 | orchestrator | 2025-09-27 21:38:41.626724 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-27 21:38:41.626730 | orchestrator | Saturday 27 September 2025 21:37:21 +0000 (0:00:00.623) 0:04:50.820 **** 2025-09-27 21:38:41.626737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-27 21:38:41.626743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 21:38:41.626750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 21:38:41.626756 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.626762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-27 21:38:41.626787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 21:38:41.626797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 21:38:41.626803 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.626810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-27 21:38:41.626816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 21:38:41.626823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-27 21:38:41.626829 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.626835 | orchestrator | 2025-09-27 21:38:41.626841 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-27 21:38:41.626847 | orchestrator | Saturday 27 September 2025 21:37:22 +0000 (0:00:00.903) 0:04:51.723 **** 2025-09-27 21:38:41.626854 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.626860 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.626866 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.626872 | orchestrator | 2025-09-27 21:38:41.626878 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-27 21:38:41.626884 | orchestrator | Saturday 27 September 2025 21:37:23 +0000 (0:00:00.760) 0:04:52.484 **** 2025-09-27 21:38:41.626890 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.626897 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.626903 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.626909 | orchestrator | 2025-09-27 21:38:41.626915 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-27 21:38:41.626921 | orchestrator | Saturday 27 September 2025 21:37:24 +0000 (0:00:01.275) 0:04:53.759 **** 2025-09-27 21:38:41.626927 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.626933 | orchestrator | 2025-09-27 21:38:41.626940 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-27 21:38:41.626946 | orchestrator | Saturday 27 September 2025 21:37:26 +0000 (0:00:01.437) 0:04:55.197 **** 2025-09-27 21:38:41.626952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 21:38:41.626959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:38:41.626970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.626994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 21:38:41.627015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:38:41.627022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 21:38:41.627032 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:38:41.627062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 21:38:41.627110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 21:38:41.627119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 21:38:41.627149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 21:38:41.627159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 21:38:41.627200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 21:38:41.627210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627233 | orchestrator | 2025-09-27 21:38:41.627239 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-27 21:38:41.627248 | orchestrator | Saturday 27 September 2025 21:37:30 +0000 (0:00:04.320) 0:04:59.517 **** 2025-09-27 21:38:41.627255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 21:38:41.627261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:38:41.627268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627294 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 21:38:41.627305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 21:38:41.627312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627334 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627341 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 21:38:41.627347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:38:41.627357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 21:38:41.627390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 21:38:41.627396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627422 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 21:38:41.627435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:38:41.627447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 21:38:41.627480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-27 21:38:41.627486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:38:41.627503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:38:41.627509 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627515 | orchestrator | 2025-09-27 21:38:41.627522 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-27 21:38:41.627528 | orchestrator | Saturday 27 September 2025 21:37:31 +0000 (0:00:01.277) 0:05:00.794 **** 2025-09-27 21:38:41.627534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-27 21:38:41.627541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-27 21:38:41.627547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}})  2025-09-27 21:38:41.627554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 21:38:41.627561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-27 21:38:41.627567 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-27 21:38:41.627585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 21:38:41.627592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 21:38:41.627603 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-27 21:38:41.627616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-27 21:38:41.627622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 21:38:41.627629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-27 21:38:41.627635 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627641 | orchestrator | 2025-09-27 21:38:41.627648 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-27 21:38:41.627657 | orchestrator | Saturday 27 September 2025 21:37:32 +0000 (0:00:00.999) 0:05:01.794 **** 2025-09-27 21:38:41.627663 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627669 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627675 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627681 | orchestrator | 2025-09-27 21:38:41.627688 | orchestrator | TASK [proxysql-config : 
Copying over prometheus ProxySQL rules config] ********* 2025-09-27 21:38:41.627694 | orchestrator | Saturday 27 September 2025 21:37:33 +0000 (0:00:00.459) 0:05:02.254 **** 2025-09-27 21:38:41.627700 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627706 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627712 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627718 | orchestrator | 2025-09-27 21:38:41.627725 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-27 21:38:41.627731 | orchestrator | Saturday 27 September 2025 21:37:34 +0000 (0:00:01.489) 0:05:03.743 **** 2025-09-27 21:38:41.627737 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.627743 | orchestrator | 2025-09-27 21:38:41.627749 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-27 21:38:41.627755 | orchestrator | Saturday 27 September 2025 21:37:36 +0000 (0:00:01.671) 0:05:05.415 **** 2025-09-27 21:38:41.627762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:38:41.627776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:38:41.627786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-27 21:38:41.627793 | orchestrator | 2025-09-27 21:38:41.627800 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-27 21:38:41.627806 | orchestrator | Saturday 27 September 2025 21:37:38 +0000 (0:00:02.355) 0:05:07.770 **** 2025-09-27 21:38:41.627812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-27 21:38:41.627819 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-27 21:38:41.627832 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-27 21:38:41.627855 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627861 | orchestrator | 2025-09-27 21:38:41.627867 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-27 21:38:41.627873 | orchestrator | Saturday 27 September 2025 21:37:39 +0000 (0:00:00.416) 0:05:08.187 **** 2025-09-27 21:38:41.627879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-27 21:38:41.627886 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-27 21:38:41.627898 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-27 21:38:41.627911 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627917 | orchestrator | 2025-09-27 21:38:41.627923 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-27 21:38:41.627929 | orchestrator | Saturday 27 September 2025 21:37:40 +0000 (0:00:00.955) 0:05:09.142 **** 2025-09-27 21:38:41.627935 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627941 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627947 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627953 | orchestrator | 2025-09-27 21:38:41.627959 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-27 21:38:41.627966 | orchestrator | Saturday 27 September 2025 21:37:40 +0000 (0:00:00.433) 0:05:09.575 **** 2025-09-27 21:38:41.627972 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.627978 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.627984 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.627990 | orchestrator | 2025-09-27 21:38:41.627996 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-27 21:38:41.628002 | orchestrator | Saturday 27 September 2025 21:37:41 +0000 (0:00:01.266) 0:05:10.842 **** 2025-09-27 21:38:41.628009 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:38:41.628015 | orchestrator | 2025-09-27 21:38:41.628021 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-27 21:38:41.628027 | orchestrator | Saturday 27 September 2025 21:37:43 +0000 (0:00:01.892) 0:05:12.734 **** 2025-09-27 21:38:41.628033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.628050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.628057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.628064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.628071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.628081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-27 21:38:41.628088 | orchestrator | 2025-09-27 21:38:41.628096 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-27 21:38:41.628103 | orchestrator | Saturday 27 September 2025 21:37:50 +0000 (0:00:06.167) 0:05:18.902 **** 2025-09-27 21:38:41.628112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.628118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.628125 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.628142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.628148 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.628167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-27 21:38:41.628204 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628212 | orchestrator | 2025-09-27 21:38:41.628218 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-27 21:38:41.628225 | orchestrator | Saturday 27 September 2025 21:37:50 +0000 (0:00:00.648) 0:05:19.550 **** 2025-09-27 21:38:41.628231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628269 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628295 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-27 21:38:41.628333 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628339 | orchestrator | 2025-09-27 21:38:41.628345 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-27 21:38:41.628352 | orchestrator | Saturday 27 September 2025 21:37:52 +0000 (0:00:01.686) 0:05:21.237 **** 2025-09-27 21:38:41.628358 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.628364 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.628370 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.628376 | orchestrator | 2025-09-27 21:38:41.628382 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-27 21:38:41.628388 | orchestrator | Saturday 27 September 2025 21:37:53 +0000 (0:00:01.378) 0:05:22.615 **** 2025-09-27 21:38:41.628394 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.628401 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.628407 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.628413 | orchestrator | 2025-09-27 21:38:41.628419 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-27 21:38:41.628425 | orchestrator | Saturday 27 September 2025 21:37:56 +0000 (0:00:02.298) 0:05:24.913 **** 2025-09-27 21:38:41.628431 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628438 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628444 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628450 | orchestrator | 2025-09-27 21:38:41.628456 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-27 21:38:41.628466 | orchestrator | Saturday 27 September 2025 21:37:56 +0000 (0:00:00.336) 0:05:25.250 **** 2025-09-27 21:38:41.628472 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628478 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628484 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628490 | orchestrator | 2025-09-27 21:38:41.628497 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-27 21:38:41.628503 | orchestrator | Saturday 27 September 2025 21:37:56 +0000 (0:00:00.291) 0:05:25.542 **** 2025-09-27 21:38:41.628509 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 21:38:41.628515 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628521 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628527 | orchestrator | 2025-09-27 21:38:41.628533 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-27 21:38:41.628539 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.593) 0:05:26.135 **** 2025-09-27 21:38:41.628546 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628552 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628558 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628564 | orchestrator | 2025-09-27 21:38:41.628570 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-27 21:38:41.628576 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.331) 0:05:26.467 **** 2025-09-27 21:38:41.628582 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628588 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628594 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628601 | orchestrator | 2025-09-27 21:38:41.628607 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-27 21:38:41.628613 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:00.307) 0:05:26.775 **** 2025-09-27 21:38:41.628619 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628625 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628630 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.628635 | orchestrator | 2025-09-27 21:38:41.628641 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-27 21:38:41.628646 | orchestrator | Saturday 27 September 2025 21:37:58 +0000 (0:00:00.787) 0:05:27.562 **** 2025-09-27 21:38:41.628652 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628657 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628662 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628668 | orchestrator | 2025-09-27 21:38:41.628673 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-27 21:38:41.628679 | orchestrator | Saturday 27 September 2025 21:37:59 +0000 (0:00:00.696) 0:05:28.258 **** 2025-09-27 21:38:41.628684 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628689 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628695 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628700 | orchestrator | 2025-09-27 21:38:41.628706 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-27 21:38:41.628711 | orchestrator | Saturday 27 September 2025 21:37:59 +0000 (0:00:00.376) 0:05:28.635 **** 2025-09-27 21:38:41.628716 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628722 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628727 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628733 | orchestrator | 2025-09-27 21:38:41.628738 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-27 21:38:41.628743 | orchestrator | Saturday 27 September 2025 21:38:00 +0000 (0:00:00.936) 0:05:29.572 **** 2025-09-27 21:38:41.628749 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628754 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628759 | 
orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628765 | orchestrator | 2025-09-27 21:38:41.628770 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-27 21:38:41.628776 | orchestrator | Saturday 27 September 2025 21:38:01 +0000 (0:00:01.174) 0:05:30.746 **** 2025-09-27 21:38:41.628784 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628790 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628797 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628803 | orchestrator | 2025-09-27 21:38:41.628808 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-27 21:38:41.628814 | orchestrator | Saturday 27 September 2025 21:38:02 +0000 (0:00:01.096) 0:05:31.842 **** 2025-09-27 21:38:41.628819 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.628825 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.628830 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.628836 | orchestrator | 2025-09-27 21:38:41.628841 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-09-27 21:38:41.628849 | orchestrator | Saturday 27 September 2025 21:38:12 +0000 (0:00:09.504) 0:05:41.346 **** 2025-09-27 21:38:41.628855 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628860 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628866 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628871 | orchestrator | 2025-09-27 21:38:41.628876 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-27 21:38:41.628882 | orchestrator | Saturday 27 September 2025 21:38:13 +0000 (0:00:00.817) 0:05:42.164 **** 2025-09-27 21:38:41.628887 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.628892 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.628898 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.628903 | orchestrator | 2025-09-27 21:38:41.628909 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-27 21:38:41.628914 | orchestrator | Saturday 27 September 2025 21:38:25 +0000 (0:00:12.582) 0:05:54.746 **** 2025-09-27 21:38:41.628919 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.628925 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.628930 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.628936 | orchestrator | 2025-09-27 21:38:41.628941 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-27 21:38:41.628946 | orchestrator | Saturday 27 September 2025 21:38:26 +0000 (0:00:01.110) 0:05:55.857 **** 2025-09-27 21:38:41.628952 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:38:41.628957 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:38:41.628962 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:38:41.628968 | orchestrator | 2025-09-27 21:38:41.628973 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-09-27 21:38:41.628978 | orchestrator | Saturday 27 September 2025 21:38:36 +0000 (0:00:09.236) 0:06:05.093 **** 2025-09-27 21:38:41.628984 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.628989 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.628994 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.629000 | orchestrator | 2025-09-27 21:38:41.629005 | orchestrator | 
RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-27 21:38:41.629011 | orchestrator | Saturday 27 September 2025 21:38:36 +0000 (0:00:00.304) 0:06:05.397 **** 2025-09-27 21:38:41.629016 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.629021 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.629027 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.629032 | orchestrator | 2025-09-27 21:38:41.629037 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-27 21:38:41.629043 | orchestrator | Saturday 27 September 2025 21:38:36 +0000 (0:00:00.295) 0:06:05.692 **** 2025-09-27 21:38:41.629048 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.629054 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.629059 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.629064 | orchestrator | 2025-09-27 21:38:41.629070 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-27 21:38:41.629075 | orchestrator | Saturday 27 September 2025 21:38:37 +0000 (0:00:00.497) 0:06:06.190 **** 2025-09-27 21:38:41.629080 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.629091 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.629096 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.629102 | orchestrator | 2025-09-27 21:38:41.629107 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-09-27 21:38:41.629112 | orchestrator | Saturday 27 September 2025 21:38:37 +0000 (0:00:00.301) 0:06:06.492 **** 2025-09-27 21:38:41.629118 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.629123 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.629129 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.629134 | orchestrator | 2025-09-27 21:38:41.629140 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-27 21:38:41.629145 | orchestrator | Saturday 27 September 2025 21:38:37 +0000 (0:00:00.299) 0:06:06.791 **** 2025-09-27 21:38:41.629150 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:38:41.629156 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:38:41.629161 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:38:41.629167 | orchestrator | 2025-09-27 21:38:41.629172 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-27 21:38:41.629185 | orchestrator | Saturday 27 September 2025 21:38:38 +0000 (0:00:00.283) 0:06:07.075 **** 2025-09-27 21:38:41.629191 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.629196 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.629202 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.629207 | orchestrator | 2025-09-27 21:38:41.629212 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-27 21:38:41.629218 | orchestrator | Saturday 27 September 2025 21:38:39 +0000 (0:00:01.130) 0:06:08.206 **** 2025-09-27 21:38:41.629223 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:38:41.629229 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:38:41.629234 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:38:41.629239 | orchestrator | 2025-09-27 21:38:41.629245 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 
21:38:41.629250 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-27 21:38:41.629256 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-27 21:38:41.629261 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-09-27 21:38:41.629267 | orchestrator | 2025-09-27 21:38:41.629272 | orchestrator | 2025-09-27 21:38:41.629280 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:38:41.629286 | orchestrator | Saturday 27 September 2025 21:38:40 +0000 (0:00:00.751) 0:06:08.958 **** 2025-09-27 21:38:41.629291 | orchestrator | =============================================================================== 2025-09-27 21:38:41.629296 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.58s 2025-09-27 21:38:41.629304 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.50s 2025-09-27 21:38:41.629310 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.24s 2025-09-27 21:38:41.629315 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.17s 2025-09-27 21:38:41.629321 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.32s 2025-09-27 21:38:41.629326 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.17s 2025-09-27 21:38:41.629331 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.51s 2025-09-27 21:38:41.629337 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.40s 2025-09-27 21:38:41.629342 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.37s 2025-09-27 21:38:41.629347 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.34s 2025-09-27 21:38:41.629356 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.32s 2025-09-27 21:38:41.629362 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.30s 2025-09-27 21:38:41.629367 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.29s 2025-09-27 21:38:41.629372 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.15s 2025-09-27 21:38:41.629377 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.03s 2025-09-27 21:38:41.629383 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.00s 2025-09-27 21:38:41.629388 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.98s 2025-09-27 21:38:41.629393 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.76s 2025-09-27 21:38:41.629399 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.65s 2025-09-27 21:38:41.629404 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.59s 2025-09-27 21:38:44.660446 | orchestrator | 2025-09-27 21:38:44 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:44.661519 | orchestrator | 2025-09-27 21:38:44 | INFO  | Task 
5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:44.663667 | orchestrator | 2025-09-27 21:38:44 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:44.663695 | orchestrator | 2025-09-27 21:38:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:38:47.695249 | orchestrator | 2025-09-27 21:38:47 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:47.695318 | orchestrator | 2025-09-27 21:38:47 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:47.695892 | orchestrator | 2025-09-27 21:38:47 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:47.695923 | orchestrator | 2025-09-27 21:38:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:38:50.830415 | orchestrator | 2025-09-27 21:38:50 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:50.830669 | orchestrator | 2025-09-27 21:38:50 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:50.831621 | orchestrator | 2025-09-27 21:38:50 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:50.831665 | orchestrator | 2025-09-27 21:38:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:38:53.871473 | orchestrator | 2025-09-27 21:38:53 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:53.871573 | orchestrator | 2025-09-27 21:38:53 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:53.872329 | orchestrator | 2025-09-27 21:38:53 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:53.872408 | orchestrator | 2025-09-27 21:38:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:38:56.928705 | orchestrator | 2025-09-27 21:38:56 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:56.930590 | orchestrator | 2025-09-27 21:38:56 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:56.933619 | orchestrator | 2025-09-27 21:38:56 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:56.933650 | orchestrator | 2025-09-27 21:38:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:38:59.969539 | orchestrator | 2025-09-27 21:38:59 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:38:59.969685 | orchestrator | 2025-09-27 21:38:59 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:38:59.971785 | orchestrator | 2025-09-27 21:38:59 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:38:59.971810 | orchestrator | 2025-09-27 21:38:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:03.003645 | orchestrator | 2025-09-27 21:39:03 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:03.004616 | orchestrator | 2025-09-27 21:39:03 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:03.005387 | orchestrator | 2025-09-27 21:39:03 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:03.005410 | orchestrator | 2025-09-27 21:39:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:06.036873 | orchestrator | 2025-09-27 21:39:06 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state 
STARTED 2025-09-27 21:39:06.037204 | orchestrator | 2025-09-27 21:39:06 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:06.037722 | orchestrator | 2025-09-27 21:39:06 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:06.037762 | orchestrator | 2025-09-27 21:39:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:09.072700 | orchestrator | 2025-09-27 21:39:09 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:09.074605 | orchestrator | 2025-09-27 21:39:09 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:09.077035 | orchestrator | 2025-09-27 21:39:09 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:09.077063 | orchestrator | 2025-09-27 21:39:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:12.113037 | orchestrator | 2025-09-27 21:39:12 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:12.113584 | orchestrator | 2025-09-27 21:39:12 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:12.116823 | orchestrator | 2025-09-27 21:39:12 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:12.116856 | orchestrator | 2025-09-27 21:39:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:15.150920 | orchestrator | 2025-09-27 21:39:15 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:15.152262 | orchestrator | 2025-09-27 21:39:15 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:15.154294 | orchestrator | 2025-09-27 21:39:15 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:15.154336 | orchestrator | 2025-09-27 21:39:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:18.183283 | orchestrator | 2025-09-27 21:39:18 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:18.184663 | orchestrator | 2025-09-27 21:39:18 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:18.186667 | orchestrator | 2025-09-27 21:39:18 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:18.186821 | orchestrator | 2025-09-27 21:39:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:21.222388 | orchestrator | 2025-09-27 21:39:21 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:21.223687 | orchestrator | 2025-09-27 21:39:21 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:21.225801 | orchestrator | 2025-09-27 21:39:21 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:21.225826 | orchestrator | 2025-09-27 21:39:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:24.273496 | orchestrator | 2025-09-27 21:39:24 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:24.276067 | orchestrator | 2025-09-27 21:39:24 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:24.277777 | orchestrator | 2025-09-27 21:39:24 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:24.277922 | orchestrator | 2025-09-27 21:39:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:27.321277 | orchestrator 
| 2025-09-27 21:39:27 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:27.324622 | orchestrator | 2025-09-27 21:39:27 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:27.326527 | orchestrator | 2025-09-27 21:39:27 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:27.326626 | orchestrator | 2025-09-27 21:39:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:30.365903 | orchestrator | 2025-09-27 21:39:30 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:30.366598 | orchestrator | 2025-09-27 21:39:30 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:30.367508 | orchestrator | 2025-09-27 21:39:30 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:30.367529 | orchestrator | 2025-09-27 21:39:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:33.411740 | orchestrator | 2025-09-27 21:39:33 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:33.413873 | orchestrator | 2025-09-27 21:39:33 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:33.416169 | orchestrator | 2025-09-27 21:39:33 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:33.416203 | orchestrator | 2025-09-27 21:39:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:36.461192 | orchestrator | 2025-09-27 21:39:36 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:36.461625 | orchestrator | 2025-09-27 21:39:36 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:36.462914 | orchestrator | 2025-09-27 21:39:36 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:36.462937 | orchestrator | 2025-09-27 21:39:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:39.502607 | orchestrator | 2025-09-27 21:39:39 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:39.504394 | orchestrator | 2025-09-27 21:39:39 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:39.506587 | orchestrator | 2025-09-27 21:39:39 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:39.506846 | orchestrator | 2025-09-27 21:39:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:42.546574 | orchestrator | 2025-09-27 21:39:42 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:42.548106 | orchestrator | 2025-09-27 21:39:42 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:42.549343 | orchestrator | 2025-09-27 21:39:42 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:42.549691 | orchestrator | 2025-09-27 21:39:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:39:45.595900 | orchestrator | 2025-09-27 21:39:45 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:39:45.597808 | orchestrator | 2025-09-27 21:39:45 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:39:45.599571 | orchestrator | 2025-09-27 21:39:45 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:39:45.599703 | orchestrator | 2025-09-27 21:39:45 | 
INFO  | Wait 1 second(s) until the next check
[... the same poll repeated roughly every three seconds from 2025-09-27 21:39:48 through 21:40:46: tasks a6c6058a-d107-4f6c-99b1-07ab025bbad4, 5570fb53-c4b5-4221-9137-dbc593d8f089 and 27cf1923-62e1-40b9-8df6-6d4ad9702aad each reported "is in state STARTED", followed by "Wait 1 second(s) until the next check" ...]
2025-09-27 21:40:49.599527 | orchestrator | 2025-09-27 21:40:49 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:40:49.601371 | orchestrator | 2025-09-27 21:40:49 | INFO  | Task
5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:40:49.602682 | orchestrator | 2025-09-27 21:40:49 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:40:49.602735 | orchestrator | 2025-09-27 21:40:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:40:52.642393 | orchestrator | 2025-09-27 21:40:52 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:40:52.643983 | orchestrator | 2025-09-27 21:40:52 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:40:52.646559 | orchestrator | 2025-09-27 21:40:52 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:40:52.646811 | orchestrator | 2025-09-27 21:40:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:40:55.686284 | orchestrator | 2025-09-27 21:40:55 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:40:55.688725 | orchestrator | 2025-09-27 21:40:55 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:40:55.690452 | orchestrator | 2025-09-27 21:40:55 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:40:55.690530 | orchestrator | 2025-09-27 21:40:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:40:58.742631 | orchestrator | 2025-09-27 21:40:58 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:40:58.742740 | orchestrator | 2025-09-27 21:40:58 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:40:58.743378 | orchestrator | 2025-09-27 21:40:58 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state STARTED 2025-09-27 21:40:58.743848 | orchestrator | 2025-09-27 21:40:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:01.784886 | orchestrator | 2025-09-27 21:41:01 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:01.786100 | orchestrator | 2025-09-27 21:41:01 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:01.788964 | orchestrator | 2025-09-27 21:41:01 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:01.795282 | orchestrator | 2025-09-27 21:41:01 | INFO  | Task 27cf1923-62e1-40b9-8df6-6d4ad9702aad is in state SUCCESS 2025-09-27 21:41:01.796518 | orchestrator | 2025-09-27 21:41:01.796558 | orchestrator | 2025-09-27 21:41:01.796570 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-27 21:41:01.796714 | orchestrator | 2025-09-27 21:41:01.796731 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-27 21:41:01.796743 | orchestrator | Saturday 27 September 2025 21:30:14 +0000 (0:00:00.773) 0:00:00.773 **** 2025-09-27 21:41:01.796755 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.796779 | orchestrator | 2025-09-27 21:41:01.796790 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-27 21:41:01.796801 | orchestrator | Saturday 27 September 2025 21:30:15 +0000 (0:00:01.303) 0:00:02.077 **** 2025-09-27 21:41:01.796812 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.796825 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.796836 | 
orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.796901 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.796914 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.796925 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.796962 | orchestrator | 2025-09-27 21:41:01.796973 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-27 21:41:01.797027 | orchestrator | Saturday 27 September 2025 21:30:17 +0000 (0:00:01.623) 0:00:03.701 **** 2025-09-27 21:41:01.797040 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.797071 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.797083 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.797096 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.797108 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.797120 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.797132 | orchestrator | 2025-09-27 21:41:01.797144 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-27 21:41:01.797198 | orchestrator | Saturday 27 September 2025 21:30:18 +0000 (0:00:00.719) 0:00:04.420 **** 2025-09-27 21:41:01.797212 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.797224 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.797262 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.797275 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.797288 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.797300 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.797313 | orchestrator | 2025-09-27 21:41:01.797354 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-27 21:41:01.797366 | orchestrator | Saturday 27 September 2025 21:30:19 +0000 (0:00:00.796) 0:00:05.217 **** 2025-09-27 21:41:01.797378 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.797390 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.797403 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.797415 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.797427 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.797438 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.797548 | orchestrator | 2025-09-27 21:41:01.797560 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-27 21:41:01.797571 | orchestrator | Saturday 27 September 2025 21:30:19 +0000 (0:00:00.567) 0:00:05.785 **** 2025-09-27 21:41:01.797582 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.797593 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.797603 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.797626 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.797638 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.797648 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.797659 | orchestrator | 2025-09-27 21:41:01.797670 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-27 21:41:01.797681 | orchestrator | Saturday 27 September 2025 21:30:20 +0000 (0:00:00.532) 0:00:06.317 **** 2025-09-27 21:41:01.797692 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.797702 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.797713 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.797724 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.797735 | orchestrator | ok: [testbed-node-4] 
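The ceph-facts steps above first probe for a podman binary and then derive container_binary and ceph_cmd from the result: podman is preferred where it is available, docker is the fallback, and ceph client calls are wrapped in a container run of the Ceph image. The following Python sketch only illustrates that selection order; the helper names and the image default are placeholders, and the real role also takes the distribution into account, so this is not the ceph-ansible implementation.

```python
import shutil


def pick_container_binary() -> str:
    """Prefer podman when it is on PATH, otherwise fall back to docker.

    Rough equivalent of the 'Check if podman binary is present' /
    'Set_fact container_binary' steps above; the actual role also keys
    on the distribution before choosing."""
    return "podman" if shutil.which("podman") else "docker"


def build_ceph_cmd(container_binary: str,
                   image: str = "quay.io/ceph/daemon:latest") -> list[str]:
    # Illustrative only: the real ceph_cmd fact also bind-mounts /etc/ceph
    # and /var/lib/ceph and uses the configured image variables; the image
    # name here is just a placeholder.
    return [container_binary, "run", "--rm", "--net=host",
            "--entrypoint=ceph", image]


if __name__ == "__main__":
    binary = pick_container_binary()
    print("container_binary:", binary)
    print("ceph_cmd:", " ".join(build_ceph_cmd(binary)))
```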
2025-09-27 21:41:01.797756 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.797767 | orchestrator | 2025-09-27 21:41:01.797778 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-27 21:41:01.797790 | orchestrator | Saturday 27 September 2025 21:30:21 +0000 (0:00:00.883) 0:00:07.200 **** 2025-09-27 21:41:01.797801 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.797812 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.797823 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.797834 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.797845 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.798001 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.798012 | orchestrator | 2025-09-27 21:41:01.798146 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-27 21:41:01.798159 | orchestrator | Saturday 27 September 2025 21:30:21 +0000 (0:00:00.881) 0:00:08.084 **** 2025-09-27 21:41:01.798170 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.798181 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.798211 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.798223 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.798233 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.798244 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.798255 | orchestrator | 2025-09-27 21:41:01.798290 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-27 21:41:01.798309 | orchestrator | Saturday 27 September 2025 21:30:22 +0000 (0:00:00.838) 0:00:08.922 **** 2025-09-27 21:41:01.798320 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:01.798331 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:41:01.798386 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:41:01.798398 | orchestrator | 2025-09-27 21:41:01.798409 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-27 21:41:01.798420 | orchestrator | Saturday 27 September 2025 21:30:23 +0000 (0:00:00.592) 0:00:09.514 **** 2025-09-27 21:41:01.798431 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.798442 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.798453 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.798463 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.798474 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.798485 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.798496 | orchestrator | 2025-09-27 21:41:01.798521 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-27 21:41:01.798533 | orchestrator | Saturday 27 September 2025 21:30:24 +0000 (0:00:00.915) 0:00:10.430 **** 2025-09-27 21:41:01.798544 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:01.798555 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:41:01.798566 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:41:01.798577 | orchestrator | 2025-09-27 21:41:01.798588 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 
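The 'Find a running mon container' task that just completed probes each monitor host with `docker ps -q --filter name=ceph-mon-<hostname>` (the exact commands are echoed in the skipped item results further down) and treats a non-empty container id as an already running mon. A minimal stand-alone sketch of that probe, assuming local docker access and reusing the host names from this run; it is not the actual delegated Ansible task.

```python
import subprocess


def find_running_mon(mon_hosts: list[str]) -> str | None:
    """Return the first monitor host whose ceph-mon container is already up.

    Mirrors the `docker ps -q --filter name=ceph-mon-<host>` probe seen in
    the item results below; in the playbook the probe is delegated to each
    monitor host, while this sketch simply runs docker locally."""
    for host in mon_hosts:
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True,
            text=True,
            check=False,
        )
        if result.stdout.strip():  # a non-empty container id means a mon is running
            return host
    return None  # fresh deployment: no monitor container found yet


if __name__ == "__main__":
    # Monitor hosts from this run; on a fresh testbed every probe comes back empty.
    print(find_running_mon(["testbed-node-0", "testbed-node-1", "testbed-node-2"]))
```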
2025-09-27 21:41:01.798599 | orchestrator | Saturday 27 September 2025 21:30:27 +0000 (0:00:03.281) 0:00:13.712 **** 2025-09-27 21:41:01.798610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:41:01.798620 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 21:41:01.798631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:41:01.798642 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.798653 | orchestrator | 2025-09-27 21:41:01.798663 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-27 21:41:01.798675 | orchestrator | Saturday 27 September 2025 21:30:27 +0000 (0:00:00.441) 0:00:14.154 **** 2025-09-27 21:41:01.798687 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.798701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.798712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.798723 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.798791 | orchestrator | 2025-09-27 21:41:01.798802 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-27 21:41:01.798813 | orchestrator | Saturday 27 September 2025 21:30:28 +0000 (0:00:00.713) 0:00:14.868 **** 2025-09-27 21:41:01.798827 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.798855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.798991 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.799029 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799075 | orchestrator | 2025-09-27 21:41:01.799090 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2025-09-27 21:41:01.799101 | orchestrator | Saturday 27 September 2025 21:30:28 +0000 (0:00:00.233) 0:00:15.102 **** 2025-09-27 21:41:01.799115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-27 21:30:24.911911', 'end': '2025-09-27 21:30:25.176275', 'delta': '0:00:00.264364', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.799274 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-27 21:30:25.726450', 'end': '2025-09-27 21:30:25.991034', 'delta': '0:00:00.264584', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.799295 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-27 21:30:27.050652', 'end': '2025-09-27 21:30:27.333562', 'delta': '0:00:00.282910', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.799307 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799319 | orchestrator | 2025-09-27 21:41:01.799330 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-27 21:41:01.799352 | orchestrator | Saturday 27 September 2025 21:30:29 +0000 (0:00:00.491) 0:00:15.594 **** 2025-09-27 21:41:01.799363 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.799373 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.799384 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.799395 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.799406 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.799417 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.799428 | orchestrator | 2025-09-27 21:41:01.799439 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-27 21:41:01.799450 | orchestrator | Saturday 27 September 2025 21:30:31 +0000 (0:00:02.015) 0:00:17.609 **** 2025-09-27 21:41:01.799461 | orchestrator | ok: 
[testbed-node-0] 2025-09-27 21:41:01.799472 | orchestrator | 2025-09-27 21:41:01.799482 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-27 21:41:01.799500 | orchestrator | Saturday 27 September 2025 21:30:32 +0000 (0:00:00.661) 0:00:18.271 **** 2025-09-27 21:41:01.799511 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799522 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.799533 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.799544 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.799555 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.799566 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.799576 | orchestrator | 2025-09-27 21:41:01.799587 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-27 21:41:01.799598 | orchestrator | Saturday 27 September 2025 21:30:34 +0000 (0:00:01.926) 0:00:20.198 **** 2025-09-27 21:41:01.799609 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799620 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.799631 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.799642 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.799653 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.799663 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.799674 | orchestrator | 2025-09-27 21:41:01.799685 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-27 21:41:01.799696 | orchestrator | Saturday 27 September 2025 21:30:35 +0000 (0:00:01.585) 0:00:21.783 **** 2025-09-27 21:41:01.799706 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799717 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.799728 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.799739 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.799750 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.799761 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.799771 | orchestrator | 2025-09-27 21:41:01.799782 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-27 21:41:01.799793 | orchestrator | Saturday 27 September 2025 21:30:36 +0000 (0:00:00.798) 0:00:22.582 **** 2025-09-27 21:41:01.799804 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799814 | orchestrator | 2025-09-27 21:41:01.799825 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-27 21:41:01.799836 | orchestrator | Saturday 27 September 2025 21:30:36 +0000 (0:00:00.179) 0:00:22.761 **** 2025-09-27 21:41:01.799847 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799858 | orchestrator | 2025-09-27 21:41:01.799868 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-27 21:41:01.799879 | orchestrator | Saturday 27 September 2025 21:30:36 +0000 (0:00:00.215) 0:00:22.977 **** 2025-09-27 21:41:01.799890 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.799901 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.799917 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.799937 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.799955 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.799978 | orchestrator | 
skipping: [testbed-node-5] 2025-09-27 21:41:01.799999 | orchestrator | 2025-09-27 21:41:01.800017 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-27 21:41:01.800101 | orchestrator | Saturday 27 September 2025 21:30:37 +0000 (0:00:00.637) 0:00:23.614 **** 2025-09-27 21:41:01.800124 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.800144 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.800166 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.800186 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.800205 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.800223 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.800235 | orchestrator | 2025-09-27 21:41:01.800246 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-27 21:41:01.800256 | orchestrator | Saturday 27 September 2025 21:30:38 +0000 (0:00:00.684) 0:00:24.299 **** 2025-09-27 21:41:01.800267 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.800278 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.800291 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.800311 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.800329 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.800346 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.800365 | orchestrator | 2025-09-27 21:41:01.800385 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-27 21:41:01.800404 | orchestrator | Saturday 27 September 2025 21:30:39 +0000 (0:00:00.885) 0:00:25.184 **** 2025-09-27 21:41:01.800422 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.800434 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.800444 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.800455 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.800466 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.800477 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.800487 | orchestrator | 2025-09-27 21:41:01.800498 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-27 21:41:01.800509 | orchestrator | Saturday 27 September 2025 21:30:39 +0000 (0:00:00.890) 0:00:26.075 **** 2025-09-27 21:41:01.800520 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.800531 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.800542 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.800552 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.800564 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.800574 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.800585 | orchestrator | 2025-09-27 21:41:01.800596 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-27 21:41:01.800607 | orchestrator | Saturday 27 September 2025 21:30:40 +0000 (0:00:00.662) 0:00:26.738 **** 2025-09-27 21:41:01.800618 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.800629 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.800639 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.800650 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.800661 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.800672 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.800683 | orchestrator | 2025-09-27 21:41:01.800694 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-27 21:41:01.800705 | orchestrator | Saturday 27 September 2025 21:30:41 +0000 (0:00:00.824) 0:00:27.563 **** 2025-09-27 21:41:01.800716 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.800727 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.800746 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.800757 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.800768 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.800779 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.800790 | orchestrator | 2025-09-27 21:41:01.800800 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-27 21:41:01.800811 | orchestrator | Saturday 27 September 2025 21:30:42 +0000 (0:00:00.646) 0:00:28.209 **** 2025-09-27 21:41:01.800833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.800955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part1', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part14', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part15', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part16', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': 
[], 'uuids': ['2025-09-27-20-48-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801152 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801211 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.801223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part1', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part14', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part15', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part16', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801412 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.801432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d', 'dm-uuid-LVM-Yghi5PMNzAUKKjcwKKhcMFpFez4MUhPBir7d0NnBE5iYUlseHvYe1FXazX5do9YF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9', 
'dm-uuid-LVM-a8TT4Fcz9cVddTCRwzsEcymcLVTFc3bZ8ys5WH9K8T3LrHUjRmzCBXWOjsnEYYz1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-09-27 21:41:01.801595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0SFHLR-LyxF-MjbY-BOat-2ikE-xt70-CdNgyn', 'scsi-0QEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990', 'scsi-SQEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801663 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.801693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iO0jIg-SzmY-BSev-S2q5-gH03-Ib3M-x0DxmD', 'scsi-0QEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a', 'scsi-SQEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c', 'scsi-SQEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2', 'dm-uuid-LVM-TpcckaZuTFD5gkHuNcp7iF3EMSpgI9UrRGozGpTgwStCtvXggsirr1Ly7MW5iEIG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8', 'dm-uuid-LVM-KLufK8gEI52UL8f1HkAEnlIB2Iyl14XcNHjIye9KHHf8fqvbtKYFAj5B5hAUzsj0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.801889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BePNm6-V9ka-X1Ve-uLWx-HM3W-H6mq-AnWkOg', 'scsi-0QEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0', 'scsi-SQEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PLLIxm-Y0We-XJu1-LMUL-FZxs-bv50-rRS5Cm', 'scsi-0QEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac', 'scsi-SQEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9', 'scsi-SQEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.801968 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.801979 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.801991 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246', 'dm-uuid-LVM-MOQAAAGC1svH5a50BTbOijG6FagohEA30d3qo9pPllDFPkpEmlvOIWdjpqFvdxlS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': 
{'ids': ['dm-name-ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f', 'dm-uuid-LVM-gOSH3V1rtxklooScdBzcM6WK8O8LWin1AYZPOij2fw1LxMKg8zO1yIAVilLyzUkT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:41:01.802261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.802298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wdC30I-L5Vz-aNsq-jnLp-jccK-lSHx-2y24Y9', 'scsi-0QEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e', 'scsi-SQEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.802320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-phfmu8-ETUC-rUWt-fenU-2AXO-cMKJ-jfcXFD', 'scsi-0QEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad', 'scsi-SQEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.802339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa', 'scsi-SQEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.802360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:41:01.802400 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.802430 | orchestrator | 2025-09-27 21:41:01.802449 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-27 21:41:01.802469 | orchestrator | Saturday 27 September 2025 21:30:43 +0000 (0:00:01.323) 0:00:29.532 **** 2025-09-27 21:41:01.802488 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802509 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802529 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802560 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802582 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802602 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802643 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802663 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802683 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part1', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part14', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part15', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part16', 'scsi-SQEMU_QEMU_HARDDISK_55c19aba-e9c5-4402-abdf-da1cbf841e84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802754 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802776 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802796 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802822 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802844 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802864 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.802921 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802968 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.802988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803017 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b04e760-262c-4120-878c-1234782e5052-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803040 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803229 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803248 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803258 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803275 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803285 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803295 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803317 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803327 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803345 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part1', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part14', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part15', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part16', 'scsi-SQEMU_QEMU_HARDDISK_e37f5674-3a4c-4b27-8c5a-833e99a56bd6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803357 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803373 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.803383 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.803399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d', 'dm-uuid-LVM-Yghi5PMNzAUKKjcwKKhcMFpFez4MUhPBir7d0NnBE5iYUlseHvYe1FXazX5do9YF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2', 'dm-uuid-LVM-TpcckaZuTFD5gkHuNcp7iF3EMSpgI9UrRGozGpTgwStCtvXggsirr1Ly7MW5iEIG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9', 'dm-uuid-LVM-a8TT4Fcz9cVddTCRwzsEcymcLVTFc3bZ8ys5WH9K8T3LrHUjRmzCBXWOjsnEYYz1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8', 'dm-uuid-LVM-KLufK8gEI52UL8f1HkAEnlIB2Iyl14XcNHjIye9KHHf8fqvbtKYFAj5B5hAUzsj0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803446 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803498 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803557 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803608 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803636 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BePNm6-V9ka-X1Ve-uLWx-HM3W-H6mq-AnWkOg', 'scsi-0QEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0', 'scsi-SQEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PLLIxm-Y0We-XJu1-LMUL-FZxs-bv50-rRS5Cm', 'scsi-0QEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac', 'scsi-SQEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803702 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9', 'scsi-SQEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803747 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.803755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0SFHLR-LyxF-MjbY-BOat-2ikE-xt70-CdNgyn', 'scsi-0QEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990', 'scsi-SQEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803767 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iO0jIg-SzmY-BSev-S2q5-gH03-Ib3M-x0DxmD', 'scsi-0QEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a', 'scsi-SQEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246', 'dm-uuid-LVM-MOQAAAGC1svH5a50BTbOijG6FagohEA30d3qo9pPllDFPkpEmlvOIWdjpqFvdxlS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c', 'scsi-SQEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803802 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f', 'dm-uuid-LVM-gOSH3V1rtxklooScdBzcM6WK8O8LWin1AYZPOij2fw1LxMKg8zO1yIAVilLyzUkT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803819 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803827 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.803839 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803852 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803882 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wdC30I-L5Vz-aNsq-jnLp-jccK-lSHx-2y24Y9', 'scsi-0QEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e', 'scsi-SQEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-phfmu8-ETUC-rUWt-fenU-2AXO-cMKJ-jfcXFD', 'scsi-0QEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad', 'scsi-SQEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa', 'scsi-SQEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:41:01.803971 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.803979 | orchestrator | 2025-09-27 21:41:01.803987 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-27 21:41:01.803995 | orchestrator | Saturday 27 September 2025 21:30:44 +0000 (0:00:00.997) 0:00:30.529 **** 2025-09-27 21:41:01.804003 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.804012 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.804020 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.804031 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.804040 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.804069 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.804078 | orchestrator | 2025-09-27 21:41:01.804086 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-27 21:41:01.804094 | orchestrator | Saturday 27 September 2025 21:30:45 +0000 (0:00:01.265) 0:00:31.794 **** 2025-09-27 21:41:01.804102 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.804110 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.804117 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.804125 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.804133 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.804141 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.804148 | orchestrator | 2025-09-27 21:41:01.804156 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-27 21:41:01.804164 | orchestrator | Saturday 27 September 2025 21:30:46 +0000 (0:00:00.533) 0:00:32.328 **** 2025-09-27 21:41:01.804172 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.804180 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.804188 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.804196 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.804204 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.804211 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.804219 | orchestrator | 2025-09-27 21:41:01.804227 | orchestrator | TASK [ceph-facts : Set 
osd_pool_default_crush_rule fact] *********************** 2025-09-27 21:41:01.804242 | orchestrator | Saturday 27 September 2025 21:30:46 +0000 (0:00:00.825) 0:00:33.153 **** 2025-09-27 21:41:01.804250 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.804258 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.804266 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.804273 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.804281 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.804289 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.804297 | orchestrator | 2025-09-27 21:41:01.804304 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-27 21:41:01.804312 | orchestrator | Saturday 27 September 2025 21:30:47 +0000 (0:00:00.545) 0:00:33.698 **** 2025-09-27 21:41:01.804320 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.804328 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.804336 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.804343 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.804351 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.804359 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.804366 | orchestrator | 2025-09-27 21:41:01.804374 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-27 21:41:01.804382 | orchestrator | Saturday 27 September 2025 21:30:48 +0000 (0:00:00.718) 0:00:34.416 **** 2025-09-27 21:41:01.804390 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.804398 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.804406 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.804414 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.804421 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.804429 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.804437 | orchestrator | 2025-09-27 21:41:01.804445 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-27 21:41:01.804456 | orchestrator | Saturday 27 September 2025 21:30:48 +0000 (0:00:00.707) 0:00:35.124 **** 2025-09-27 21:41:01.804464 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:01.804473 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-27 21:41:01.804480 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-27 21:41:01.804488 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-27 21:41:01.804496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-27 21:41:01.804504 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-27 21:41:01.804512 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-27 21:41:01.804519 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-27 21:41:01.804527 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-27 21:41:01.804535 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-27 21:41:01.804543 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-27 21:41:01.804550 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-27 21:41:01.804558 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-27 21:41:01.804566 | orchestrator | ok: [testbed-node-3] => 
(item=testbed-node-1) 2025-09-27 21:41:01.804573 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-27 21:41:01.804581 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-27 21:41:01.804589 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-27 21:41:01.804597 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-27 21:41:01.804605 | orchestrator | 2025-09-27 21:41:01.804612 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-27 21:41:01.804620 | orchestrator | Saturday 27 September 2025 21:30:52 +0000 (0:00:03.758) 0:00:38.882 **** 2025-09-27 21:41:01.804628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:41:01.804636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 21:41:01.804649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:41:01.804657 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.804664 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-27 21:41:01.804672 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-27 21:41:01.804680 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-27 21:41:01.804688 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.804696 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-27 21:41:01.804703 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-27 21:41:01.804711 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-27 21:41:01.804719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-27 21:41:01.804868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-27 21:41:01.804881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-27 21:41:01.804890 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.804898 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-27 21:41:01.804905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-27 21:41:01.804913 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-27 21:41:01.804921 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.804929 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.804937 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-27 21:41:01.804945 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-27 21:41:01.804952 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-27 21:41:01.804960 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.804968 | orchestrator | 2025-09-27 21:41:01.804976 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-27 21:41:01.804984 | orchestrator | Saturday 27 September 2025 21:30:53 +0000 (0:00:00.547) 0:00:39.430 **** 2025-09-27 21:41:01.804992 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.805000 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.805008 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.805016 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.805024 | orchestrator | 2025-09-27 
21:41:01.805032 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-27 21:41:01.805040 | orchestrator | Saturday 27 September 2025 21:30:54 +0000 (0:00:00.950) 0:00:40.381 **** 2025-09-27 21:41:01.805062 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805070 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.805078 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.805086 | orchestrator | 2025-09-27 21:41:01.805093 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-27 21:41:01.805101 | orchestrator | Saturday 27 September 2025 21:30:54 +0000 (0:00:00.420) 0:00:40.802 **** 2025-09-27 21:41:01.805109 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805117 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.805125 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.805133 | orchestrator | 2025-09-27 21:41:01.805141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-27 21:41:01.805148 | orchestrator | Saturday 27 September 2025 21:30:54 +0000 (0:00:00.328) 0:00:41.131 **** 2025-09-27 21:41:01.805156 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805164 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.805172 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.805180 | orchestrator | 2025-09-27 21:41:01.805187 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-27 21:41:01.805200 | orchestrator | Saturday 27 September 2025 21:30:55 +0000 (0:00:00.312) 0:00:41.443 **** 2025-09-27 21:41:01.805214 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.805222 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.805230 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.805237 | orchestrator | 2025-09-27 21:41:01.805245 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-27 21:41:01.805253 | orchestrator | Saturday 27 September 2025 21:30:55 +0000 (0:00:00.716) 0:00:42.160 **** 2025-09-27 21:41:01.805261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.805269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.805277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.805285 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805293 | orchestrator | 2025-09-27 21:41:01.805300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-27 21:41:01.805308 | orchestrator | Saturday 27 September 2025 21:30:56 +0000 (0:00:00.366) 0:00:42.526 **** 2025-09-27 21:41:01.805316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.805324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.805332 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.805340 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805348 | orchestrator | 2025-09-27 21:41:01.805356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-27 21:41:01.805364 | orchestrator | Saturday 27 September 2025 21:30:56 +0000 (0:00:00.408) 
0:00:42.935 **** 2025-09-27 21:41:01.805371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.805379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.805387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.805395 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805402 | orchestrator | 2025-09-27 21:41:01.805410 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-27 21:41:01.805418 | orchestrator | Saturday 27 September 2025 21:30:57 +0000 (0:00:00.659) 0:00:43.594 **** 2025-09-27 21:41:01.805426 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.805434 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.805442 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.805449 | orchestrator | 2025-09-27 21:41:01.805458 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-27 21:41:01.805467 | orchestrator | Saturday 27 September 2025 21:30:57 +0000 (0:00:00.361) 0:00:43.956 **** 2025-09-27 21:41:01.805476 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-27 21:41:01.805485 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-27 21:41:01.805494 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-27 21:41:01.805503 | orchestrator | 2025-09-27 21:41:01.805512 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-27 21:41:01.805521 | orchestrator | Saturday 27 September 2025 21:30:58 +0000 (0:00:01.015) 0:00:44.971 **** 2025-09-27 21:41:01.805555 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:01.805565 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:41:01.805575 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:41:01.805583 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-09-27 21:41:01.805592 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-27 21:41:01.805601 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-27 21:41:01.805611 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-27 21:41:01.805620 | orchestrator | 2025-09-27 21:41:01.805629 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-27 21:41:01.805644 | orchestrator | Saturday 27 September 2025 21:31:00 +0000 (0:00:01.687) 0:00:46.659 **** 2025-09-27 21:41:01.805653 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:01.805661 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:41:01.805669 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:41:01.805677 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-09-27 21:41:01.805685 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-27 21:41:01.805693 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-27 21:41:01.805700 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-27 21:41:01.805708 | orchestrator | 2025-09-27 21:41:01.805716 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.805724 | orchestrator | Saturday 27 September 2025 21:31:02 +0000 (0:00:02.328) 0:00:48.988 **** 2025-09-27 21:41:01.805732 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.805741 | orchestrator | 2025-09-27 21:41:01.805749 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.805757 | orchestrator | Saturday 27 September 2025 21:31:04 +0000 (0:00:01.835) 0:00:50.823 **** 2025-09-27 21:41:01.805765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.805773 | orchestrator | 2025-09-27 21:41:01.805784 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.805793 | orchestrator | Saturday 27 September 2025 21:31:05 +0000 (0:00:01.347) 0:00:52.171 **** 2025-09-27 21:41:01.805801 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.805809 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.805817 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.805825 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.805833 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.805841 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.805849 | orchestrator | 2025-09-27 21:41:01.805856 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.805864 | orchestrator | Saturday 27 September 2025 21:31:07 +0000 (0:00:01.243) 0:00:53.415 **** 2025-09-27 21:41:01.805872 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.805880 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.805888 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.805896 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.805904 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.805912 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.805920 | orchestrator | 2025-09-27 21:41:01.805928 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.805936 | orchestrator | Saturday 27 September 2025 21:31:08 +0000 (0:00:01.622) 0:00:55.038 **** 2025-09-27 21:41:01.805943 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.805951 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.805959 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.805967 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.805975 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.805983 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.805991 | orchestrator | 2025-09-27 21:41:01.805999 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.806007 | orchestrator | Saturday 27 September 2025 21:31:09 +0000 (0:00:01.114) 0:00:56.153 **** 2025-09-27 21:41:01.806037 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806074 | orchestrator | skipping: 
[testbed-node-1] 2025-09-27 21:41:01.806082 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806090 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.806098 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.806106 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.806114 | orchestrator | 2025-09-27 21:41:01.806122 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.806130 | orchestrator | Saturday 27 September 2025 21:31:11 +0000 (0:00:01.351) 0:00:57.504 **** 2025-09-27 21:41:01.806138 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806146 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806154 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.806162 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.806170 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.806178 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.806185 | orchestrator | 2025-09-27 21:41:01.806194 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 21:41:01.806202 | orchestrator | Saturday 27 September 2025 21:31:11 +0000 (0:00:00.663) 0:00:58.168 **** 2025-09-27 21:41:01.806236 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806245 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806253 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806261 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806269 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806277 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.806285 | orchestrator | 2025-09-27 21:41:01.806293 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.806301 | orchestrator | Saturday 27 September 2025 21:31:12 +0000 (0:00:00.654) 0:00:58.823 **** 2025-09-27 21:41:01.806309 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806317 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806325 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806333 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806340 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806348 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.806356 | orchestrator | 2025-09-27 21:41:01.806364 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.806372 | orchestrator | Saturday 27 September 2025 21:31:13 +0000 (0:00:00.520) 0:00:59.344 **** 2025-09-27 21:41:01.806380 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.806388 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.806396 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.806404 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.806412 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.806419 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.806427 | orchestrator | 2025-09-27 21:41:01.806435 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 21:41:01.806443 | orchestrator | Saturday 27 September 2025 21:31:14 +0000 (0:00:01.354) 0:01:00.699 **** 2025-09-27 21:41:01.806451 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.806459 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.806466 | orchestrator | ok: 
[testbed-node-2] 2025-09-27 21:41:01.806474 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.806482 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.806490 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.806498 | orchestrator | 2025-09-27 21:41:01.806506 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.806514 | orchestrator | Saturday 27 September 2025 21:31:15 +0000 (0:00:01.165) 0:01:01.864 **** 2025-09-27 21:41:01.806522 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806530 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806538 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806546 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806554 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806562 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.806574 | orchestrator | 2025-09-27 21:41:01.806583 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.806591 | orchestrator | Saturday 27 September 2025 21:31:16 +0000 (0:00:00.763) 0:01:02.628 **** 2025-09-27 21:41:01.806598 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.806606 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.806614 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.806622 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806630 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806638 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.806646 | orchestrator | 2025-09-27 21:41:01.806657 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.806666 | orchestrator | Saturday 27 September 2025 21:31:17 +0000 (0:00:00.607) 0:01:03.236 **** 2025-09-27 21:41:01.806674 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806682 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806690 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806698 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.806706 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.806713 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.806721 | orchestrator | 2025-09-27 21:41:01.806729 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.806737 | orchestrator | Saturday 27 September 2025 21:31:17 +0000 (0:00:00.644) 0:01:03.880 **** 2025-09-27 21:41:01.806745 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806753 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806761 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806769 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.806777 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.806784 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.806792 | orchestrator | 2025-09-27 21:41:01.806800 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.806808 | orchestrator | Saturday 27 September 2025 21:31:18 +0000 (0:00:00.513) 0:01:04.393 **** 2025-09-27 21:41:01.806816 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806824 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806832 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806840 | orchestrator | ok: 
[testbed-node-3] 2025-09-27 21:41:01.806848 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.806855 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.806863 | orchestrator | 2025-09-27 21:41:01.806871 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.806879 | orchestrator | Saturday 27 September 2025 21:31:18 +0000 (0:00:00.733) 0:01:05.126 **** 2025-09-27 21:41:01.806887 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806895 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806903 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806911 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806919 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806927 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.806935 | orchestrator | 2025-09-27 21:41:01.806943 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 21:41:01.806951 | orchestrator | Saturday 27 September 2025 21:31:19 +0000 (0:00:00.580) 0:01:05.707 **** 2025-09-27 21:41:01.806959 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.806967 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.806974 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.806982 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.806990 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.806998 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.807006 | orchestrator | 2025-09-27 21:41:01.807014 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.807044 | orchestrator | Saturday 27 September 2025 21:31:20 +0000 (0:00:00.836) 0:01:06.543 **** 2025-09-27 21:41:01.807094 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.807103 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.807110 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.807118 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.807126 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.807134 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.807142 | orchestrator | 2025-09-27 21:41:01.807150 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.807158 | orchestrator | Saturday 27 September 2025 21:31:20 +0000 (0:00:00.489) 0:01:07.033 **** 2025-09-27 21:41:01.807166 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.807173 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.807181 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.807189 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.807197 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.807204 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.807212 | orchestrator | 2025-09-27 21:41:01.807220 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.807228 | orchestrator | Saturday 27 September 2025 21:31:21 +0000 (0:00:00.630) 0:01:07.664 **** 2025-09-27 21:41:01.807236 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.807244 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.807251 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.807259 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.807267 | orchestrator | ok: [testbed-node-4] 
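Note on the ceph-handler tasks recorded above: the role probes each node for running ceph-* containers and stores the outcome as handler_*_status facts, which later handlers consult before restarting a daemon. A minimal, illustrative sketch of that pattern is shown below; the task names, the use of podman, and the group name 'mons' are assumptions for illustration only, not the actual ceph-ansible implementation.

    # Illustrative sketch only -- assumed names, not the ceph-ansible source.
    - name: Check for a mon container
      ansible.builtin.command: "podman ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false
      when: inventory_hostname in groups.get('mons', [])

    - name: Set_fact handler_mon_status
      ansible.builtin.set_fact:
        handler_mon_status: "{{ ceph_mon_container_stat.get('stdout_lines', []) | length > 0 }}"
      when: inventory_hostname in groups.get('mons', [])

Under this sketch, a host whose filter query returns no container IDs ends up with handler_mon_status set to false, which matches the skipping/ok split per host group seen in the log entries above and below.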
2025-09-27 21:41:01.807275 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.807282 | orchestrator | 2025-09-27 21:41:01.807290 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-27 21:41:01.807298 | orchestrator | Saturday 27 September 2025 21:31:22 +0000 (0:00:01.031) 0:01:08.695 **** 2025-09-27 21:41:01.807306 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.807314 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.807322 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.807329 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.807337 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.807345 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.807353 | orchestrator | 2025-09-27 21:41:01.807360 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-27 21:41:01.807368 | orchestrator | Saturday 27 September 2025 21:31:23 +0000 (0:00:01.422) 0:01:10.118 **** 2025-09-27 21:41:01.807376 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.807384 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.807392 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.807399 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.807407 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.807415 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.807423 | orchestrator | 2025-09-27 21:41:01.807431 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-27 21:41:01.807438 | orchestrator | Saturday 27 September 2025 21:31:25 +0000 (0:00:01.960) 0:01:12.078 **** 2025-09-27 21:41:01.807447 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.807455 | orchestrator | 2025-09-27 21:41:01.807467 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-27 21:41:01.807475 | orchestrator | Saturday 27 September 2025 21:31:26 +0000 (0:00:00.925) 0:01:13.004 **** 2025-09-27 21:41:01.807483 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.807490 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.807498 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.807506 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.807514 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.807521 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.807529 | orchestrator | 2025-09-27 21:41:01.807537 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-27 21:41:01.807551 | orchestrator | Saturday 27 September 2025 21:31:27 +0000 (0:00:00.509) 0:01:13.513 **** 2025-09-27 21:41:01.807559 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.807567 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.807575 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.807583 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.807590 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.807598 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.807606 | orchestrator | 2025-09-27 21:41:01.807614 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] 
************************** 2025-09-27 21:41:01.807622 | orchestrator | Saturday 27 September 2025 21:31:27 +0000 (0:00:00.608) 0:01:14.122 **** 2025-09-27 21:41:01.807630 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 21:41:01.807638 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 21:41:01.807645 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 21:41:01.807653 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 21:41:01.807661 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 21:41:01.807669 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-27 21:41:01.807677 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 21:41:01.807685 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 21:41:01.807693 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 21:41:01.807700 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 21:41:01.807708 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 21:41:01.807716 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-27 21:41:01.807724 | orchestrator | 2025-09-27 21:41:01.807757 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-27 21:41:01.807766 | orchestrator | Saturday 27 September 2025 21:31:29 +0000 (0:00:01.269) 0:01:15.392 **** 2025-09-27 21:41:01.807775 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.807782 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.807790 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.807798 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.807806 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.807814 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.807822 | orchestrator | 2025-09-27 21:41:01.807830 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-27 21:41:01.807837 | orchestrator | Saturday 27 September 2025 21:31:30 +0000 (0:00:01.275) 0:01:16.667 **** 2025-09-27 21:41:01.807845 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.807853 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.807861 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.807869 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.807877 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.807884 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.807892 | orchestrator | 2025-09-27 21:41:01.807900 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-27 21:41:01.807908 | orchestrator | Saturday 27 September 2025 21:31:31 +0000 (0:00:00.597) 0:01:17.265 **** 2025-09-27 21:41:01.807916 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.807924 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.807931 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.807939 | orchestrator 
| skipping: [testbed-node-3] 2025-09-27 21:41:01.807947 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.807960 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.807968 | orchestrator | 2025-09-27 21:41:01.807976 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-27 21:41:01.807984 | orchestrator | Saturday 27 September 2025 21:31:31 +0000 (0:00:00.720) 0:01:17.985 **** 2025-09-27 21:41:01.807992 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.807999 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.808007 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.808015 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.808023 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.808030 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.808038 | orchestrator | 2025-09-27 21:41:01.808046 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-27 21:41:01.808067 | orchestrator | Saturday 27 September 2025 21:31:32 +0000 (0:00:00.605) 0:01:18.591 **** 2025-09-27 21:41:01.808075 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.808083 | orchestrator | 2025-09-27 21:41:01.808091 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-27 21:41:01.808099 | orchestrator | Saturday 27 September 2025 21:31:33 +0000 (0:00:01.135) 0:01:19.727 **** 2025-09-27 21:41:01.808107 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.808115 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.808123 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.808130 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.808143 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.808156 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.808171 | orchestrator | 2025-09-27 21:41:01.808191 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-27 21:41:01.808206 | orchestrator | Saturday 27 September 2025 21:32:18 +0000 (0:00:45.449) 0:02:05.176 **** 2025-09-27 21:41:01.808218 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 21:41:01.808231 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 21:41:01.808244 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 21:41:01.808257 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.808269 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 21:41:01.808281 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 21:41:01.808294 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 21:41:01.808307 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.808320 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 21:41:01.808334 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 21:41:01.808347 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 
21:41:01.808360 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.808369 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 21:41:01.808377 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 21:41:01.808384 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 21:41:01.808392 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.808400 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 21:41:01.808408 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 21:41:01.808416 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 21:41:01.808424 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.808439 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-27 21:41:01.808447 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-27 21:41:01.808455 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-27 21:41:01.808494 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.808503 | orchestrator | 2025-09-27 21:41:01.808511 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-27 21:41:01.808519 | orchestrator | Saturday 27 September 2025 21:32:19 +0000 (0:00:00.610) 0:02:05.786 **** 2025-09-27 21:41:01.808527 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.808535 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.808543 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.808550 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.808558 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.808566 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.808573 | orchestrator | 2025-09-27 21:41:01.808581 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-27 21:41:01.808589 | orchestrator | Saturday 27 September 2025 21:32:20 +0000 (0:00:00.536) 0:02:06.323 **** 2025-09-27 21:41:01.808597 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.808605 | orchestrator | 2025-09-27 21:41:01.808613 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-27 21:41:01.808621 | orchestrator | Saturday 27 September 2025 21:32:20 +0000 (0:00:00.351) 0:02:06.674 **** 2025-09-27 21:41:01.808629 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.808637 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.808644 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.808652 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.808660 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.808668 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.808675 | orchestrator | 2025-09-27 21:41:01.808683 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-27 21:41:01.808691 | orchestrator | Saturday 27 September 2025 21:32:21 +0000 (0:00:00.583) 0:02:07.258 **** 2025-09-27 21:41:01.808699 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.808707 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.808715 | 
orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.808722 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.808732 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.808741 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.808751 | orchestrator | 2025-09-27 21:41:01.808761 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-27 21:41:01.808770 | orchestrator | Saturday 27 September 2025 21:32:21 +0000 (0:00:00.758) 0:02:08.016 **** 2025-09-27 21:41:01.808780 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.808789 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.808799 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.808842 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.808853 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.808862 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.808872 | orchestrator | 2025-09-27 21:41:01.808881 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-27 21:41:01.808891 | orchestrator | Saturday 27 September 2025 21:32:22 +0000 (0:00:00.580) 0:02:08.597 **** 2025-09-27 21:41:01.808900 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.808910 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.808919 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.808929 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.808943 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.808953 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.808962 | orchestrator | 2025-09-27 21:41:01.808972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-27 21:41:01.808987 | orchestrator | Saturday 27 September 2025 21:32:24 +0000 (0:00:02.302) 0:02:10.899 **** 2025-09-27 21:41:01.808997 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.809007 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.809016 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.809025 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.809035 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.809044 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.809102 | orchestrator | 2025-09-27 21:41:01.809112 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-27 21:41:01.809122 | orchestrator | Saturday 27 September 2025 21:32:25 +0000 (0:00:00.604) 0:02:11.504 **** 2025-09-27 21:41:01.809132 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.809143 | orchestrator | 2025-09-27 21:41:01.809152 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-27 21:41:01.809162 | orchestrator | Saturday 27 September 2025 21:32:26 +0000 (0:00:01.273) 0:02:12.777 **** 2025-09-27 21:41:01.809172 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809181 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809191 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809201 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809210 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809219 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809229 
| orchestrator | 2025-09-27 21:41:01.809238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-27 21:41:01.809248 | orchestrator | Saturday 27 September 2025 21:32:27 +0000 (0:00:00.669) 0:02:13.446 **** 2025-09-27 21:41:01.809258 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809267 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809277 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809286 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809296 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809305 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809315 | orchestrator | 2025-09-27 21:41:01.809324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-27 21:41:01.809334 | orchestrator | Saturday 27 September 2025 21:32:28 +0000 (0:00:00.937) 0:02:14.384 **** 2025-09-27 21:41:01.809344 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809353 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809363 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809372 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809382 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809392 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809401 | orchestrator | 2025-09-27 21:41:01.809411 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-27 21:41:01.809453 | orchestrator | Saturday 27 September 2025 21:32:28 +0000 (0:00:00.624) 0:02:15.009 **** 2025-09-27 21:41:01.809464 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809474 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809484 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809493 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809502 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809512 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809522 | orchestrator | 2025-09-27 21:41:01.809531 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-27 21:41:01.809541 | orchestrator | Saturday 27 September 2025 21:32:29 +0000 (0:00:00.859) 0:02:15.869 **** 2025-09-27 21:41:01.809550 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809560 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809570 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809579 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809588 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809604 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809614 | orchestrator | 2025-09-27 21:41:01.809622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-27 21:41:01.809630 | orchestrator | Saturday 27 September 2025 21:32:30 +0000 (0:00:00.623) 0:02:16.493 **** 2025-09-27 21:41:01.809638 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809646 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809654 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809661 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809669 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809677 | orchestrator | skipping: [testbed-node-5] 2025-09-27 
21:41:01.809685 | orchestrator | 2025-09-27 21:41:01.809693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-27 21:41:01.809701 | orchestrator | Saturday 27 September 2025 21:32:30 +0000 (0:00:00.621) 0:02:17.114 **** 2025-09-27 21:41:01.809709 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809716 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809724 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809732 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809740 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809748 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809755 | orchestrator | 2025-09-27 21:41:01.809763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-27 21:41:01.809771 | orchestrator | Saturday 27 September 2025 21:32:31 +0000 (0:00:00.583) 0:02:17.698 **** 2025-09-27 21:41:01.809779 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.809787 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.809794 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.809802 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.809810 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.809818 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.809826 | orchestrator | 2025-09-27 21:41:01.809834 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-27 21:41:01.809842 | orchestrator | Saturday 27 September 2025 21:32:32 +0000 (0:00:00.647) 0:02:18.345 **** 2025-09-27 21:41:01.809850 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.809861 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.809869 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.809877 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.809885 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.809893 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.809901 | orchestrator | 2025-09-27 21:41:01.809909 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-27 21:41:01.809917 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.864) 0:02:19.210 **** 2025-09-27 21:41:01.809925 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.809933 | orchestrator | 2025-09-27 21:41:01.809941 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-27 21:41:01.809949 | orchestrator | Saturday 27 September 2025 21:32:33 +0000 (0:00:00.935) 0:02:20.145 **** 2025-09-27 21:41:01.809957 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-27 21:41:01.809965 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-27 21:41:01.809973 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-27 21:41:01.809980 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-27 21:41:01.809988 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-27 21:41:01.809996 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-27 21:41:01.810004 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-27 21:41:01.810012 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-27 21:41:01.810043 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-27 21:41:01.810069 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-27 21:41:01.810078 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-27 21:41:01.810086 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-27 21:41:01.810093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-27 21:41:01.810101 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-27 21:41:01.810109 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-27 21:41:01.810117 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-27 21:41:01.810125 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-27 21:41:01.810133 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-27 21:41:01.810141 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-27 21:41:01.810149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-27 21:41:01.810157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-27 21:41:01.810189 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-27 21:41:01.810199 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-27 21:41:01.810207 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-27 21:41:01.810215 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-27 21:41:01.810222 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-27 21:41:01.810230 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-27 21:41:01.810238 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-27 21:41:01.810246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-27 21:41:01.810254 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-27 21:41:01.810262 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-27 21:41:01.810269 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-27 21:41:01.810277 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-27 21:41:01.810285 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-27 21:41:01.810293 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-27 21:41:01.810301 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-27 21:41:01.810309 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-27 21:41:01.810317 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-27 21:41:01.810324 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-27 21:41:01.810332 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-27 21:41:01.810340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-27 21:41:01.810348 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-27 21:41:01.810356 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-27 21:41:01.810363 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-27 21:41:01.810371 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-27 21:41:01.810379 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-27 21:41:01.810387 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-27 21:41:01.810395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-27 21:41:01.810402 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-27 21:41:01.810410 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-27 21:41:01.810418 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-27 21:41:01.810426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-27 21:41:01.810443 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-27 21:41:01.810451 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-27 21:41:01.810459 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-27 21:41:01.810467 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-27 21:41:01.810474 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-27 21:41:01.810482 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-27 21:41:01.810490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-27 21:41:01.810498 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-27 21:41:01.810506 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-27 21:41:01.810513 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-27 21:41:01.810521 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-27 21:41:01.810529 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-27 21:41:01.810537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-27 21:41:01.810545 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-27 21:41:01.810553 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-27 21:41:01.810560 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-27 21:41:01.810568 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-27 21:41:01.810576 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-27 21:41:01.810584 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-27 21:41:01.810592 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-27 21:41:01.810600 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-27 21:41:01.810607 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-27 21:41:01.810615 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-27 21:41:01.810623 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-27 21:41:01.810631 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-rbd) 2025-09-27 21:41:01.810639 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-27 21:41:01.810646 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-27 21:41:01.810675 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-27 21:41:01.810684 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-27 21:41:01.810692 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-27 21:41:01.810700 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-27 21:41:01.810708 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-27 21:41:01.810716 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-27 21:41:01.810724 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-27 21:41:01.810732 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-27 21:41:01.810740 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-27 21:41:01.810748 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-27 21:41:01.810756 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-27 21:41:01.810764 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-27 21:41:01.810779 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-27 21:41:01.810787 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-27 21:41:01.810795 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-27 21:41:01.810803 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-27 21:41:01.810810 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-27 21:41:01.810818 | orchestrator | 2025-09-27 21:41:01.810826 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-27 21:41:01.810834 | orchestrator | Saturday 27 September 2025 21:32:40 +0000 (0:00:06.613) 0:02:26.759 **** 2025-09-27 21:41:01.810842 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.810849 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.810857 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.810865 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.810873 | orchestrator | 2025-09-27 21:41:01.810881 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-27 21:41:01.810889 | orchestrator | Saturday 27 September 2025 21:32:41 +0000 (0:00:00.822) 0:02:27.582 **** 2025-09-27 21:41:01.810897 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.810905 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.810917 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.810925 | orchestrator | 2025-09-27 21:41:01.810933 | orchestrator | TASK [ceph-config : Generate 
environment file] ********************************* 2025-09-27 21:41:01.810941 | orchestrator | Saturday 27 September 2025 21:32:42 +0000 (0:00:00.652) 0:02:28.234 **** 2025-09-27 21:41:01.810949 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.810957 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.810965 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.810973 | orchestrator | 2025-09-27 21:41:01.810981 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-27 21:41:01.810989 | orchestrator | Saturday 27 September 2025 21:32:43 +0000 (0:00:01.282) 0:02:29.517 **** 2025-09-27 21:41:01.810997 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811005 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811013 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811020 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.811028 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.811036 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.811044 | orchestrator | 2025-09-27 21:41:01.811064 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-27 21:41:01.811072 | orchestrator | Saturday 27 September 2025 21:32:43 +0000 (0:00:00.556) 0:02:30.073 **** 2025-09-27 21:41:01.811080 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811088 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811096 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811104 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.811112 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.811120 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.811127 | orchestrator | 2025-09-27 21:41:01.811135 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-27 21:41:01.811143 | orchestrator | Saturday 27 September 2025 21:32:44 +0000 (0:00:00.526) 0:02:30.600 **** 2025-09-27 21:41:01.811156 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811164 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811172 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811180 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811188 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811196 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811204 | orchestrator | 2025-09-27 21:41:01.811212 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-27 21:41:01.811220 | orchestrator | Saturday 27 September 2025 21:32:45 +0000 (0:00:00.698) 0:02:31.298 **** 2025-09-27 21:41:01.811228 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811236 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811266 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811275 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811283 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811290 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811298 | orchestrator | 2025-09-27 
21:41:01.811306 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-27 21:41:01.811314 | orchestrator | Saturday 27 September 2025 21:32:45 +0000 (0:00:00.548) 0:02:31.847 **** 2025-09-27 21:41:01.811322 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811330 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811338 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811346 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811353 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811361 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811369 | orchestrator | 2025-09-27 21:41:01.811377 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-27 21:41:01.811385 | orchestrator | Saturday 27 September 2025 21:32:46 +0000 (0:00:00.699) 0:02:32.547 **** 2025-09-27 21:41:01.811393 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811400 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811408 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811416 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811424 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811432 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811439 | orchestrator | 2025-09-27 21:41:01.811447 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-27 21:41:01.811455 | orchestrator | Saturday 27 September 2025 21:32:46 +0000 (0:00:00.528) 0:02:33.076 **** 2025-09-27 21:41:01.811463 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811471 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811479 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811486 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811494 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811502 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811510 | orchestrator | 2025-09-27 21:41:01.811518 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-27 21:41:01.811526 | orchestrator | Saturday 27 September 2025 21:32:47 +0000 (0:00:00.684) 0:02:33.761 **** 2025-09-27 21:41:01.811534 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811541 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811549 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811557 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811565 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811572 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811580 | orchestrator | 2025-09-27 21:41:01.811588 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-27 21:41:01.811596 | orchestrator | Saturday 27 September 2025 21:32:48 +0000 (0:00:00.473) 0:02:34.234 **** 2025-09-27 21:41:01.811604 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811612 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811626 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811634 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.811645 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.811653 | 
orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.811661 | orchestrator | 2025-09-27 21:41:01.811669 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-27 21:41:01.811677 | orchestrator | Saturday 27 September 2025 21:32:51 +0000 (0:00:03.126) 0:02:37.360 **** 2025-09-27 21:41:01.811685 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811693 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811700 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811708 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.811716 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.811724 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.811732 | orchestrator | 2025-09-27 21:41:01.811740 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-27 21:41:01.811748 | orchestrator | Saturday 27 September 2025 21:32:51 +0000 (0:00:00.598) 0:02:37.959 **** 2025-09-27 21:41:01.811755 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811763 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811771 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811784 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.811797 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.811808 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.811829 | orchestrator | 2025-09-27 21:41:01.811843 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-27 21:41:01.811856 | orchestrator | Saturday 27 September 2025 21:32:52 +0000 (0:00:00.776) 0:02:38.735 **** 2025-09-27 21:41:01.811870 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811883 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811896 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811910 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.811923 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.811931 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.811939 | orchestrator | 2025-09-27 21:41:01.811947 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-27 21:41:01.811954 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:00.494) 0:02:39.230 **** 2025-09-27 21:41:01.811962 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.811970 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.811978 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.811986 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.811994 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.812002 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.812010 | orchestrator | 2025-09-27 21:41:01.812018 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-27 21:41:01.812102 | orchestrator | Saturday 27 September 2025 21:32:53 +0000 (0:00:00.905) 0:02:40.136 **** 2025-09-27 21:41:01.812115 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812123 | orchestrator | skipping: 
[testbed-node-1] 2025-09-27 21:41:01.812130 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812139 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-27 21:41:01.812149 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-27 21:41:01.812166 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-27 21:41:01.812174 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-27 21:41:01.812182 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.812190 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.812198 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-27 21:41:01.812211 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-27 21:41:01.812219 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.812227 | orchestrator | 2025-09-27 21:41:01.812235 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-27 21:41:01.812243 | orchestrator | Saturday 27 September 2025 21:32:54 +0000 (0:00:00.672) 0:02:40.808 **** 2025-09-27 21:41:01.812251 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812258 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812266 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812274 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.812282 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.812289 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.812297 | orchestrator | 2025-09-27 21:41:01.812305 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-27 21:41:01.812313 | orchestrator | Saturday 27 September 2025 21:32:55 +0000 (0:00:00.783) 0:02:41.591 **** 2025-09-27 21:41:01.812320 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812328 | orchestrator | skipping: [testbed-node-1] 2025-09-27 
21:41:01.812336 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812342 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.812349 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.812355 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.812362 | orchestrator | 2025-09-27 21:41:01.812368 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-27 21:41:01.812375 | orchestrator | Saturday 27 September 2025 21:32:56 +0000 (0:00:00.618) 0:02:42.210 **** 2025-09-27 21:41:01.812382 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812388 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812395 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812401 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.812408 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.812414 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.812421 | orchestrator | 2025-09-27 21:41:01.812428 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-27 21:41:01.812434 | orchestrator | Saturday 27 September 2025 21:32:56 +0000 (0:00:00.755) 0:02:42.965 **** 2025-09-27 21:41:01.812441 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812455 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812462 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812469 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.812475 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.812482 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.812488 | orchestrator | 2025-09-27 21:41:01.812495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-27 21:41:01.812501 | orchestrator | Saturday 27 September 2025 21:32:57 +0000 (0:00:00.760) 0:02:43.726 **** 2025-09-27 21:41:01.812508 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812515 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812521 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812548 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.812556 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.812563 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.812569 | orchestrator | 2025-09-27 21:41:01.812576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-27 21:41:01.812583 | orchestrator | Saturday 27 September 2025 21:32:58 +0000 (0:00:00.973) 0:02:44.700 **** 2025-09-27 21:41:01.812589 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812596 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812603 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812609 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.812616 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.812623 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.812629 | orchestrator | 2025-09-27 21:41:01.812636 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-27 21:41:01.812643 | orchestrator | Saturday 27 September 2025 21:32:59 +0000 (0:00:01.259) 0:02:45.959 **** 2025-09-27 21:41:01.812649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-27 
21:41:01.812656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-27 21:41:01.812663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-27 21:41:01.812669 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812676 | orchestrator | 2025-09-27 21:41:01.812682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-27 21:41:01.812689 | orchestrator | Saturday 27 September 2025 21:33:00 +0000 (0:00:00.643) 0:02:46.603 **** 2025-09-27 21:41:01.812696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-27 21:41:01.812702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-27 21:41:01.812709 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-27 21:41:01.812715 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812722 | orchestrator | 2025-09-27 21:41:01.812729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-27 21:41:01.812735 | orchestrator | Saturday 27 September 2025 21:33:00 +0000 (0:00:00.523) 0:02:47.127 **** 2025-09-27 21:41:01.812742 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-27 21:41:01.812748 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-27 21:41:01.812755 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-27 21:41:01.812762 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812772 | orchestrator | 2025-09-27 21:41:01.812784 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-27 21:41:01.812795 | orchestrator | Saturday 27 September 2025 21:33:01 +0000 (0:00:00.623) 0:02:47.750 **** 2025-09-27 21:41:01.812806 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812819 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812831 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812843 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.812851 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.812858 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.812864 | orchestrator | 2025-09-27 21:41:01.812877 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-27 21:41:01.812887 | orchestrator | Saturday 27 September 2025 21:33:02 +0000 (0:00:00.654) 0:02:48.405 **** 2025-09-27 21:41:01.812894 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-27 21:41:01.812901 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.812907 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-27 21:41:01.812914 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.812921 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-27 21:41:01.812927 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.812934 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-27 21:41:01.812940 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-27 21:41:01.812947 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-27 21:41:01.812953 | orchestrator | 2025-09-27 21:41:01.812960 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-27 21:41:01.812967 | orchestrator | Saturday 27 September 2025 21:33:04 +0000 (0:00:02.410) 0:02:50.815 **** 2025-09-27 21:41:01.812973 | orchestrator 
| changed: [testbed-node-1] 2025-09-27 21:41:01.812980 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.812987 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.812993 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.813000 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.813006 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.813013 | orchestrator | 2025-09-27 21:41:01.813019 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 21:41:01.813026 | orchestrator | Saturday 27 September 2025 21:33:07 +0000 (0:00:02.623) 0:02:53.439 **** 2025-09-27 21:41:01.813033 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.813039 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.813046 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.813067 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.813074 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.813081 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.813087 | orchestrator | 2025-09-27 21:41:01.813094 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-27 21:41:01.813101 | orchestrator | Saturday 27 September 2025 21:33:08 +0000 (0:00:01.189) 0:02:54.629 **** 2025-09-27 21:41:01.813108 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813114 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.813121 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.813127 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.813134 | orchestrator | 2025-09-27 21:41:01.813141 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-27 21:41:01.813147 | orchestrator | Saturday 27 September 2025 21:33:09 +0000 (0:00:01.228) 0:02:55.858 **** 2025-09-27 21:41:01.813154 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.813160 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.813167 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.813173 | orchestrator | 2025-09-27 21:41:01.813180 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-27 21:41:01.813211 | orchestrator | Saturday 27 September 2025 21:33:10 +0000 (0:00:00.354) 0:02:56.213 **** 2025-09-27 21:41:01.813219 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.813225 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.813232 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.813239 | orchestrator | 2025-09-27 21:41:01.813245 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-27 21:41:01.813252 | orchestrator | Saturday 27 September 2025 21:33:11 +0000 (0:00:01.629) 0:02:57.842 **** 2025-09-27 21:41:01.813259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:41:01.813266 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 21:41:01.813272 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:41:01.813284 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.813291 | orchestrator | 2025-09-27 21:41:01.813297 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-27 
21:41:01.813304 | orchestrator | Saturday 27 September 2025 21:33:12 +0000 (0:00:00.693) 0:02:58.536 **** 2025-09-27 21:41:01.813310 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.813317 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.813324 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.813330 | orchestrator | 2025-09-27 21:41:01.813337 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-27 21:41:01.813344 | orchestrator | Saturday 27 September 2025 21:33:12 +0000 (0:00:00.436) 0:02:58.972 **** 2025-09-27 21:41:01.813350 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.813357 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.813363 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.813370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.813377 | orchestrator | 2025-09-27 21:41:01.813384 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-27 21:41:01.813390 | orchestrator | Saturday 27 September 2025 21:33:13 +0000 (0:00:00.907) 0:02:59.880 **** 2025-09-27 21:41:01.813397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.813404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.813410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.813417 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813423 | orchestrator | 2025-09-27 21:41:01.813430 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-27 21:41:01.813436 | orchestrator | Saturday 27 September 2025 21:33:14 +0000 (0:00:00.496) 0:03:00.377 **** 2025-09-27 21:41:01.813443 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813450 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.813456 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.813463 | orchestrator | 2025-09-27 21:41:01.813470 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-27 21:41:01.813476 | orchestrator | Saturday 27 September 2025 21:33:14 +0000 (0:00:00.405) 0:03:00.783 **** 2025-09-27 21:41:01.813483 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813489 | orchestrator | 2025-09-27 21:41:01.813499 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-27 21:41:01.813506 | orchestrator | Saturday 27 September 2025 21:33:14 +0000 (0:00:00.244) 0:03:01.027 **** 2025-09-27 21:41:01.813513 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813519 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.813526 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.813532 | orchestrator | 2025-09-27 21:41:01.813539 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-27 21:41:01.813546 | orchestrator | Saturday 27 September 2025 21:33:15 +0000 (0:00:00.522) 0:03:01.550 **** 2025-09-27 21:41:01.813552 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813559 | orchestrator | 2025-09-27 21:41:01.813566 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-27 21:41:01.813572 | orchestrator | Saturday 27 September 
2025 21:33:15 +0000 (0:00:00.205) 0:03:01.755 **** 2025-09-27 21:41:01.813579 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813585 | orchestrator | 2025-09-27 21:41:01.813592 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-27 21:41:01.813599 | orchestrator | Saturday 27 September 2025 21:33:15 +0000 (0:00:00.210) 0:03:01.965 **** 2025-09-27 21:41:01.813605 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813612 | orchestrator | 2025-09-27 21:41:01.813618 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-27 21:41:01.813630 | orchestrator | Saturday 27 September 2025 21:33:15 +0000 (0:00:00.100) 0:03:02.066 **** 2025-09-27 21:41:01.813637 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813643 | orchestrator | 2025-09-27 21:41:01.813650 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-27 21:41:01.813657 | orchestrator | Saturday 27 September 2025 21:33:16 +0000 (0:00:00.184) 0:03:02.251 **** 2025-09-27 21:41:01.813663 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813670 | orchestrator | 2025-09-27 21:41:01.813676 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-27 21:41:01.813683 | orchestrator | Saturday 27 September 2025 21:33:16 +0000 (0:00:00.229) 0:03:02.480 **** 2025-09-27 21:41:01.813690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.813696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.813703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.813709 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813716 | orchestrator | 2025-09-27 21:41:01.813723 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-27 21:41:01.813729 | orchestrator | Saturday 27 September 2025 21:33:16 +0000 (0:00:00.523) 0:03:03.003 **** 2025-09-27 21:41:01.813736 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813742 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.813749 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.813755 | orchestrator | 2025-09-27 21:41:01.813788 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-27 21:41:01.813801 | orchestrator | Saturday 27 September 2025 21:33:17 +0000 (0:00:00.792) 0:03:03.795 **** 2025-09-27 21:41:01.813813 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813825 | orchestrator | 2025-09-27 21:41:01.813835 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-27 21:41:01.813842 | orchestrator | Saturday 27 September 2025 21:33:17 +0000 (0:00:00.207) 0:03:04.003 **** 2025-09-27 21:41:01.813849 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.813856 | orchestrator | 2025-09-27 21:41:01.813862 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-27 21:41:01.813869 | orchestrator | Saturday 27 September 2025 21:33:18 +0000 (0:00:00.274) 0:03:04.277 **** 2025-09-27 21:41:01.813876 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.813882 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.813889 | orchestrator | skipping: [testbed-node-2] 
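The handler output above follows a copy-then-conditionally-run pattern: a temporary directory is created, a per-daemon restart script is templated into it, and the script is executed only for hosts whose restart flag was set, which is why most of the restart tasks report "skipping" in this run. Below is a minimal sketch of that pattern for the mon case, assuming hypothetical template, variable, and group names rather than the actual ceph-ansible task definitions:

# Sketch of handler tasks (under a handlers: section), names are illustrative.
- name: Make tempdir for scripts
  ansible.builtin.tempfile:
    state: directory
    suffix: _ceph_handlers
  register: tmpdirpath
  listen: "restart ceph mons"

- name: Copy mon restart script
  ansible.builtin.template:
    src: restart_mon_daemon.sh.j2          # hypothetical template name
    dest: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
    mode: "0750"
  listen: "restart ceph mons"

- name: Restart ceph mon daemon(s)
  ansible.builtin.command: "{{ tmpdirpath.path }}/restart_mon_daemon.sh"
  when: hostvars[item]['handler_mon_status'] | default(false) | bool
  with_items: "{{ groups['mons'] }}"        # assumed inventory group name
  delegate_to: "{{ item }}"
  run_once: true
  listen: "restart ceph mons"

The listen keyword lets several handler tasks react to a single notification, which matches the grouped RUNNING HANDLER blocks seen in the log.
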
2025-09-27 21:41:01.813896 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.813902 | orchestrator | 2025-09-27 21:41:01.813909 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-27 21:41:01.813916 | orchestrator | Saturday 27 September 2025 21:33:19 +0000 (0:00:01.030) 0:03:05.308 **** 2025-09-27 21:41:01.813923 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.813929 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.813936 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.813943 | orchestrator | 2025-09-27 21:41:01.813949 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-27 21:41:01.813956 | orchestrator | Saturday 27 September 2025 21:33:19 +0000 (0:00:00.401) 0:03:05.710 **** 2025-09-27 21:41:01.813963 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.813969 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.813976 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.813983 | orchestrator | 2025-09-27 21:41:01.813989 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-27 21:41:01.813996 | orchestrator | Saturday 27 September 2025 21:33:20 +0000 (0:00:01.340) 0:03:07.051 **** 2025-09-27 21:41:01.814003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.814009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.814043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.814066 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.814074 | orchestrator | 2025-09-27 21:41:01.814080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-27 21:41:01.814087 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.662) 0:03:07.713 **** 2025-09-27 21:41:01.814094 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.814100 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.814107 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.814113 | orchestrator | 2025-09-27 21:41:01.814120 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-27 21:41:01.814130 | orchestrator | Saturday 27 September 2025 21:33:21 +0000 (0:00:00.290) 0:03:08.003 **** 2025-09-27 21:41:01.814137 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.814144 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.814150 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.814157 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.814163 | orchestrator | 2025-09-27 21:41:01.814170 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-27 21:41:01.814177 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:01.200) 0:03:09.204 **** 2025-09-27 21:41:01.814183 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.814190 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.814197 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.814203 | orchestrator | 2025-09-27 21:41:01.814210 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] 
*********************** 2025-09-27 21:41:01.814216 | orchestrator | Saturday 27 September 2025 21:33:23 +0000 (0:00:00.448) 0:03:09.653 **** 2025-09-27 21:41:01.814223 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.814230 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.814236 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.814243 | orchestrator | 2025-09-27 21:41:01.814249 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-27 21:41:01.814256 | orchestrator | Saturday 27 September 2025 21:33:24 +0000 (0:00:01.389) 0:03:11.043 **** 2025-09-27 21:41:01.814262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.814269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.814279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.814289 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.814300 | orchestrator | 2025-09-27 21:41:01.814311 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-27 21:41:01.814321 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.542) 0:03:11.585 **** 2025-09-27 21:41:01.814328 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.814335 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.814342 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.814348 | orchestrator | 2025-09-27 21:41:01.814355 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-27 21:41:01.814361 | orchestrator | Saturday 27 September 2025 21:33:25 +0000 (0:00:00.292) 0:03:11.878 **** 2025-09-27 21:41:01.814368 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.814374 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.814381 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.814388 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.814394 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.814401 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.814407 | orchestrator | 2025-09-27 21:41:01.814414 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-27 21:41:01.814420 | orchestrator | Saturday 27 September 2025 21:33:26 +0000 (0:00:00.664) 0:03:12.542 **** 2025-09-27 21:41:01.814453 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.814460 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.814476 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.814488 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.814499 | orchestrator | 2025-09-27 21:41:01.814510 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-27 21:41:01.814519 | orchestrator | Saturday 27 September 2025 21:33:27 +0000 (0:00:01.069) 0:03:13.612 **** 2025-09-27 21:41:01.814526 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.814532 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.814539 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.814546 | orchestrator | 2025-09-27 21:41:01.814552 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-27 21:41:01.814559 | orchestrator | Saturday 27 
September 2025 21:33:27 +0000 (0:00:00.277) 0:03:13.890 **** 2025-09-27 21:41:01.814565 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.814572 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.814579 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.814585 | orchestrator | 2025-09-27 21:41:01.814592 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-27 21:41:01.814599 | orchestrator | Saturday 27 September 2025 21:33:29 +0000 (0:00:01.507) 0:03:15.397 **** 2025-09-27 21:41:01.814605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:41:01.814612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 21:41:01.814619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:41:01.814625 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.814632 | orchestrator | 2025-09-27 21:41:01.814639 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-27 21:41:01.814645 | orchestrator | Saturday 27 September 2025 21:33:29 +0000 (0:00:00.665) 0:03:16.062 **** 2025-09-27 21:41:01.814652 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.814658 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.814665 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.814671 | orchestrator | 2025-09-27 21:41:01.814678 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-27 21:41:01.814685 | orchestrator | 2025-09-27 21:41:01.814691 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.814698 | orchestrator | Saturday 27 September 2025 21:33:30 +0000 (0:00:00.619) 0:03:16.681 **** 2025-09-27 21:41:01.814705 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.814711 | orchestrator | 2025-09-27 21:41:01.814718 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.814725 | orchestrator | Saturday 27 September 2025 21:33:31 +0000 (0:00:00.615) 0:03:17.297 **** 2025-09-27 21:41:01.814731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.814738 | orchestrator | 2025-09-27 21:41:01.814749 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.814756 | orchestrator | Saturday 27 September 2025 21:33:31 +0000 (0:00:00.529) 0:03:17.827 **** 2025-09-27 21:41:01.814762 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.814772 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.814783 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.814795 | orchestrator | 2025-09-27 21:41:01.814806 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.814817 | orchestrator | Saturday 27 September 2025 21:33:32 +0000 (0:00:00.715) 0:03:18.543 **** 2025-09-27 21:41:01.814829 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.814841 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.814853 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.814860 | orchestrator | 2025-09-27 21:41:01.814867 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.814879 | orchestrator | Saturday 27 September 2025 21:33:32 +0000 (0:00:00.501) 0:03:19.045 **** 2025-09-27 21:41:01.814886 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.814892 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.814899 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.814906 | orchestrator | 2025-09-27 21:41:01.814912 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.814919 | orchestrator | Saturday 27 September 2025 21:33:33 +0000 (0:00:00.269) 0:03:19.314 **** 2025-09-27 21:41:01.814925 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.814932 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.814939 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.814945 | orchestrator | 2025-09-27 21:41:01.814952 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.814959 | orchestrator | Saturday 27 September 2025 21:33:33 +0000 (0:00:00.305) 0:03:19.620 **** 2025-09-27 21:41:01.814965 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.814972 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.814979 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.814985 | orchestrator | 2025-09-27 21:41:01.814992 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 21:41:01.814999 | orchestrator | Saturday 27 September 2025 21:33:34 +0000 (0:00:00.858) 0:03:20.478 **** 2025-09-27 21:41:01.815005 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815012 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815019 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815025 | orchestrator | 2025-09-27 21:41:01.815032 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.815038 | orchestrator | Saturday 27 September 2025 21:33:34 +0000 (0:00:00.446) 0:03:20.924 **** 2025-09-27 21:41:01.815045 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815091 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815098 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815105 | orchestrator | 2025-09-27 21:41:01.815111 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.815144 | orchestrator | Saturday 27 September 2025 21:33:35 +0000 (0:00:00.276) 0:03:21.201 **** 2025-09-27 21:41:01.815152 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815159 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815166 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815172 | orchestrator | 2025-09-27 21:41:01.815179 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 21:41:01.815185 | orchestrator | Saturday 27 September 2025 21:33:35 +0000 (0:00:00.842) 0:03:22.043 **** 2025-09-27 21:41:01.815192 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815199 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815205 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815212 | orchestrator | 2025-09-27 21:41:01.815219 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.815225 | 
orchestrator | Saturday 27 September 2025 21:33:36 +0000 (0:00:00.812) 0:03:22.855 **** 2025-09-27 21:41:01.815232 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815239 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815245 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815252 | orchestrator | 2025-09-27 21:41:01.815259 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.815265 | orchestrator | Saturday 27 September 2025 21:33:37 +0000 (0:00:00.447) 0:03:23.303 **** 2025-09-27 21:41:01.815272 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815279 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815285 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815292 | orchestrator | 2025-09-27 21:41:01.815298 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.815305 | orchestrator | Saturday 27 September 2025 21:33:37 +0000 (0:00:00.323) 0:03:23.626 **** 2025-09-27 21:41:01.815317 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815323 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815330 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815337 | orchestrator | 2025-09-27 21:41:01.815343 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.815349 | orchestrator | Saturday 27 September 2025 21:33:37 +0000 (0:00:00.285) 0:03:23.911 **** 2025-09-27 21:41:01.815356 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815362 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815368 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815374 | orchestrator | 2025-09-27 21:41:01.815380 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.815386 | orchestrator | Saturday 27 September 2025 21:33:38 +0000 (0:00:00.371) 0:03:24.283 **** 2025-09-27 21:41:01.815392 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815399 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815405 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815411 | orchestrator | 2025-09-27 21:41:01.815417 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.815423 | orchestrator | Saturday 27 September 2025 21:33:38 +0000 (0:00:00.633) 0:03:24.917 **** 2025-09-27 21:41:01.815429 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815435 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815442 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815448 | orchestrator | 2025-09-27 21:41:01.815457 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 21:41:01.815464 | orchestrator | Saturday 27 September 2025 21:33:39 +0000 (0:00:00.901) 0:03:25.819 **** 2025-09-27 21:41:01.815470 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815476 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.815482 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.815488 | orchestrator | 2025-09-27 21:41:01.815494 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.815500 | orchestrator | Saturday 27 September 2025 21:33:40 +0000 (0:00:00.651) 
0:03:26.471 **** 2025-09-27 21:41:01.815507 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815513 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815519 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815525 | orchestrator | 2025-09-27 21:41:01.815531 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.815537 | orchestrator | Saturday 27 September 2025 21:33:40 +0000 (0:00:00.525) 0:03:26.996 **** 2025-09-27 21:41:01.815544 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815550 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815556 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815562 | orchestrator | 2025-09-27 21:41:01.815568 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.815574 | orchestrator | Saturday 27 September 2025 21:33:41 +0000 (0:00:00.483) 0:03:27.479 **** 2025-09-27 21:41:01.815580 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815587 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815593 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815599 | orchestrator | 2025-09-27 21:41:01.815608 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-27 21:41:01.815619 | orchestrator | Saturday 27 September 2025 21:33:42 +0000 (0:00:00.721) 0:03:28.201 **** 2025-09-27 21:41:01.815635 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815646 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815657 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815667 | orchestrator | 2025-09-27 21:41:01.815678 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-27 21:41:01.815687 | orchestrator | Saturday 27 September 2025 21:33:42 +0000 (0:00:00.512) 0:03:28.713 **** 2025-09-27 21:41:01.815698 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.815716 | orchestrator | 2025-09-27 21:41:01.815727 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-27 21:41:01.815737 | orchestrator | Saturday 27 September 2025 21:33:43 +0000 (0:00:00.828) 0:03:29.542 **** 2025-09-27 21:41:01.815747 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.815759 | orchestrator | 2025-09-27 21:41:01.815769 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-27 21:41:01.815780 | orchestrator | Saturday 27 September 2025 21:33:43 +0000 (0:00:00.111) 0:03:29.653 **** 2025-09-27 21:41:01.815791 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-27 21:41:01.815802 | orchestrator | 2025-09-27 21:41:01.815848 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-09-27 21:41:01.815863 | orchestrator | Saturday 27 September 2025 21:33:44 +0000 (0:00:00.902) 0:03:30.556 **** 2025-09-27 21:41:01.815873 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815884 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815894 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815905 | orchestrator | 2025-09-27 21:41:01.815916 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-27 21:41:01.815927 | orchestrator | Saturday 27 September 2025 21:33:44 
+0000 (0:00:00.341) 0:03:30.897 **** 2025-09-27 21:41:01.815937 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.815944 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.815950 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.815956 | orchestrator | 2025-09-27 21:41:01.815962 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-27 21:41:01.815968 | orchestrator | Saturday 27 September 2025 21:33:45 +0000 (0:00:00.430) 0:03:31.328 **** 2025-09-27 21:41:01.815975 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.815981 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.815987 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.815993 | orchestrator | 2025-09-27 21:41:01.815999 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-27 21:41:01.816005 | orchestrator | Saturday 27 September 2025 21:33:46 +0000 (0:00:01.186) 0:03:32.514 **** 2025-09-27 21:41:01.816012 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816018 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816024 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816030 | orchestrator | 2025-09-27 21:41:01.816036 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-27 21:41:01.816042 | orchestrator | Saturday 27 September 2025 21:33:47 +0000 (0:00:00.974) 0:03:33.489 **** 2025-09-27 21:41:01.816066 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816076 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816082 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816088 | orchestrator | 2025-09-27 21:41:01.816094 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-27 21:41:01.816100 | orchestrator | Saturday 27 September 2025 21:33:48 +0000 (0:00:00.830) 0:03:34.319 **** 2025-09-27 21:41:01.816106 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.816113 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.816119 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.816125 | orchestrator | 2025-09-27 21:41:01.816131 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-27 21:41:01.816137 | orchestrator | Saturday 27 September 2025 21:33:48 +0000 (0:00:00.835) 0:03:35.155 **** 2025-09-27 21:41:01.816143 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816149 | orchestrator | 2025-09-27 21:41:01.816155 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-27 21:41:01.816162 | orchestrator | Saturday 27 September 2025 21:33:51 +0000 (0:00:02.063) 0:03:37.219 **** 2025-09-27 21:41:01.816168 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.816174 | orchestrator | 2025-09-27 21:41:01.816180 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-27 21:41:01.816200 | orchestrator | Saturday 27 September 2025 21:33:51 +0000 (0:00:00.833) 0:03:38.052 **** 2025-09-27 21:41:01.816207 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.816213 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-27 21:41:01.816220 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.816226 | orchestrator | 
changed: [testbed-node-1] => (item=None) 2025-09-27 21:41:01.816232 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:41:01.816238 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:41:01.816244 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:41:01.816250 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2025-09-27 21:41:01.816256 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:41:01.816263 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2025-09-27 21:41:01.816269 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-27 21:41:01.816275 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-27 21:41:01.816281 | orchestrator | 2025-09-27 21:41:01.816287 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-27 21:41:01.816293 | orchestrator | Saturday 27 September 2025 21:33:55 +0000 (0:00:03.852) 0:03:41.904 **** 2025-09-27 21:41:01.816299 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816305 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816312 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816318 | orchestrator | 2025-09-27 21:41:01.816324 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-27 21:41:01.816330 | orchestrator | Saturday 27 September 2025 21:33:56 +0000 (0:00:01.023) 0:03:42.928 **** 2025-09-27 21:41:01.816336 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.816342 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.816348 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.816355 | orchestrator | 2025-09-27 21:41:01.816361 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-27 21:41:01.816367 | orchestrator | Saturday 27 September 2025 21:33:57 +0000 (0:00:00.285) 0:03:43.213 **** 2025-09-27 21:41:01.816373 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.816379 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.816385 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.816391 | orchestrator | 2025-09-27 21:41:01.816397 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-27 21:41:01.816404 | orchestrator | Saturday 27 September 2025 21:33:57 +0000 (0:00:00.268) 0:03:43.481 **** 2025-09-27 21:41:01.816410 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816416 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816422 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816428 | orchestrator | 2025-09-27 21:41:01.816434 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-27 21:41:01.816465 | orchestrator | Saturday 27 September 2025 21:33:59 +0000 (0:00:01.763) 0:03:45.244 **** 2025-09-27 21:41:01.816472 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816478 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816484 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816491 | orchestrator | 2025-09-27 21:41:01.816497 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-27 21:41:01.816503 | orchestrator | Saturday 27 September 2025 21:34:00 +0000 (0:00:01.173) 0:03:46.418 
**** 2025-09-27 21:41:01.816509 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.816516 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.816522 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.816528 | orchestrator | 2025-09-27 21:41:01.816534 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-27 21:41:01.816544 | orchestrator | Saturday 27 September 2025 21:34:00 +0000 (0:00:00.360) 0:03:46.779 **** 2025-09-27 21:41:01.816551 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.816557 | orchestrator | 2025-09-27 21:41:01.816563 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-27 21:41:01.816570 | orchestrator | Saturday 27 September 2025 21:34:01 +0000 (0:00:00.490) 0:03:47.269 **** 2025-09-27 21:41:01.816576 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.816582 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.816588 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.816594 | orchestrator | 2025-09-27 21:41:01.816600 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-27 21:41:01.816607 | orchestrator | Saturday 27 September 2025 21:34:01 +0000 (0:00:00.411) 0:03:47.680 **** 2025-09-27 21:41:01.816613 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.816620 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.816626 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.816632 | orchestrator | 2025-09-27 21:41:01.816638 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-27 21:41:01.816644 | orchestrator | Saturday 27 September 2025 21:34:01 +0000 (0:00:00.252) 0:03:47.932 **** 2025-09-27 21:41:01.816651 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.816657 | orchestrator | 2025-09-27 21:41:01.816667 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-27 21:41:01.816673 | orchestrator | Saturday 27 September 2025 21:34:02 +0000 (0:00:00.495) 0:03:48.428 **** 2025-09-27 21:41:01.816679 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816685 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816691 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816697 | orchestrator | 2025-09-27 21:41:01.816703 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-27 21:41:01.816710 | orchestrator | Saturday 27 September 2025 21:34:04 +0000 (0:00:02.261) 0:03:50.690 **** 2025-09-27 21:41:01.816716 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816722 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816728 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816734 | orchestrator | 2025-09-27 21:41:01.816743 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-27 21:41:01.816750 | orchestrator | Saturday 27 September 2025 21:34:05 +0000 (0:00:01.240) 0:03:51.931 **** 2025-09-27 21:41:01.816756 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816763 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816769 | orchestrator | 
changed: [testbed-node-2] 2025-09-27 21:41:01.816775 | orchestrator | 2025-09-27 21:41:01.816781 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-27 21:41:01.816788 | orchestrator | Saturday 27 September 2025 21:34:07 +0000 (0:00:01.981) 0:03:53.912 **** 2025-09-27 21:41:01.816794 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.816800 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.816806 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.816812 | orchestrator | 2025-09-27 21:41:01.816819 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-27 21:41:01.816825 | orchestrator | Saturday 27 September 2025 21:34:10 +0000 (0:00:02.315) 0:03:56.228 **** 2025-09-27 21:41:01.816831 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.816837 | orchestrator | 2025-09-27 21:41:01.816843 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-27 21:41:01.816850 | orchestrator | Saturday 27 September 2025 21:34:10 +0000 (0:00:00.799) 0:03:57.027 **** 2025-09-27 21:41:01.816856 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-27 21:41:01.816867 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.816874 | orchestrator | 2025-09-27 21:41:01.816880 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-27 21:41:01.816886 | orchestrator | Saturday 27 September 2025 21:34:32 +0000 (0:00:22.086) 0:04:19.114 **** 2025-09-27 21:41:01.816893 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.816899 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.816905 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.816911 | orchestrator | 2025-09-27 21:41:01.816918 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-27 21:41:01.816924 | orchestrator | Saturday 27 September 2025 21:34:43 +0000 (0:00:10.284) 0:04:29.400 **** 2025-09-27 21:41:01.816930 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.816936 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.816943 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.816949 | orchestrator | 2025-09-27 21:41:01.816955 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-27 21:41:01.816962 | orchestrator | Saturday 27 September 2025 21:34:43 +0000 (0:00:00.301) 0:04:29.701 **** 2025-09-27 21:41:01.817000 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-27 21:41:01.817018 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2025-09-27 21:41:01.817030 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-27 21:41:01.817041 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-27 21:41:01.817068 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-27 21:41:01.817085 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6027392b807dd3d4ffa70184b5d9b85f51448da4'}])  2025-09-27 21:41:01.817093 | orchestrator | 2025-09-27 21:41:01.817099 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 21:41:01.817106 | orchestrator | Saturday 27 September 2025 21:34:59 +0000 (0:00:15.867) 0:04:45.568 **** 2025-09-27 21:41:01.817112 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817119 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817130 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817136 | orchestrator | 2025-09-27 21:41:01.817143 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-27 21:41:01.817149 | orchestrator | Saturday 27 September 2025 21:34:59 +0000 (0:00:00.446) 0:04:46.015 **** 2025-09-27 21:41:01.817155 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.817161 | orchestrator | 2025-09-27 21:41:01.817168 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-27 21:41:01.817174 | orchestrator | Saturday 27 September 2025 21:35:00 +0000 (0:00:00.787) 0:04:46.802 **** 2025-09-27 21:41:01.817180 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817186 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817192 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817199 | orchestrator | 2025-09-27 21:41:01.817205 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-27 21:41:01.817211 | orchestrator | Saturday 27 September 2025 21:35:00 +0000 (0:00:00.337) 0:04:47.140 **** 2025-09-27 21:41:01.817217 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817223 | orchestrator | 
skipping: [testbed-node-1] 2025-09-27 21:41:01.817229 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817236 | orchestrator | 2025-09-27 21:41:01.817242 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-27 21:41:01.817248 | orchestrator | Saturday 27 September 2025 21:35:01 +0000 (0:00:00.325) 0:04:47.465 **** 2025-09-27 21:41:01.817254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:41:01.817260 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 21:41:01.817267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:41:01.817273 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817280 | orchestrator | 2025-09-27 21:41:01.817286 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-27 21:41:01.817292 | orchestrator | Saturday 27 September 2025 21:35:02 +0000 (0:00:00.837) 0:04:48.302 **** 2025-09-27 21:41:01.817298 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817304 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817310 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817317 | orchestrator | 2025-09-27 21:41:01.817323 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-27 21:41:01.817329 | orchestrator | 2025-09-27 21:41:01.817335 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.817364 | orchestrator | Saturday 27 September 2025 21:35:02 +0000 (0:00:00.796) 0:04:49.099 **** 2025-09-27 21:41:01.817372 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.817378 | orchestrator | 2025-09-27 21:41:01.817384 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.817391 | orchestrator | Saturday 27 September 2025 21:35:03 +0000 (0:00:00.509) 0:04:49.608 **** 2025-09-27 21:41:01.817397 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.817403 | orchestrator | 2025-09-27 21:41:01.817409 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.817415 | orchestrator | Saturday 27 September 2025 21:35:04 +0000 (0:00:00.687) 0:04:50.295 **** 2025-09-27 21:41:01.817422 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817428 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817434 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817440 | orchestrator | 2025-09-27 21:41:01.817446 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.817453 | orchestrator | Saturday 27 September 2025 21:35:04 +0000 (0:00:00.682) 0:04:50.978 **** 2025-09-27 21:41:01.817459 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817469 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817475 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817481 | orchestrator | 2025-09-27 21:41:01.817487 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.817494 | orchestrator | Saturday 27 September 2025 21:35:05 +0000 (0:00:00.263) 
0:04:51.241 **** 2025-09-27 21:41:01.817500 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817506 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817512 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817518 | orchestrator | 2025-09-27 21:41:01.817525 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.817531 | orchestrator | Saturday 27 September 2025 21:35:05 +0000 (0:00:00.293) 0:04:51.534 **** 2025-09-27 21:41:01.817537 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817597 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817623 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817630 | orchestrator | 2025-09-27 21:41:01.817636 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.817642 | orchestrator | Saturday 27 September 2025 21:35:05 +0000 (0:00:00.449) 0:04:51.984 **** 2025-09-27 21:41:01.817648 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817654 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817661 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817667 | orchestrator | 2025-09-27 21:41:01.817673 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 21:41:01.817679 | orchestrator | Saturday 27 September 2025 21:35:06 +0000 (0:00:00.717) 0:04:52.702 **** 2025-09-27 21:41:01.817686 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817692 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817702 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817709 | orchestrator | 2025-09-27 21:41:01.817715 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.817721 | orchestrator | Saturday 27 September 2025 21:35:06 +0000 (0:00:00.267) 0:04:52.970 **** 2025-09-27 21:41:01.817727 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817734 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817740 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817746 | orchestrator | 2025-09-27 21:41:01.817752 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.817758 | orchestrator | Saturday 27 September 2025 21:35:07 +0000 (0:00:00.264) 0:04:53.234 **** 2025-09-27 21:41:01.817765 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817771 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817777 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817783 | orchestrator | 2025-09-27 21:41:01.817789 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 21:41:01.817796 | orchestrator | Saturday 27 September 2025 21:35:07 +0000 (0:00:00.710) 0:04:53.945 **** 2025-09-27 21:41:01.817802 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817808 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817814 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817820 | orchestrator | 2025-09-27 21:41:01.817827 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.817833 | orchestrator | Saturday 27 September 2025 21:35:08 +0000 (0:00:00.884) 0:04:54.829 **** 2025-09-27 21:41:01.817839 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817845 
| orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817851 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817857 | orchestrator | 2025-09-27 21:41:01.817864 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.817870 | orchestrator | Saturday 27 September 2025 21:35:08 +0000 (0:00:00.254) 0:04:55.083 **** 2025-09-27 21:41:01.817876 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.817882 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.817892 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.817899 | orchestrator | 2025-09-27 21:41:01.817905 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.817911 | orchestrator | Saturday 27 September 2025 21:35:09 +0000 (0:00:00.266) 0:04:55.350 **** 2025-09-27 21:41:01.817917 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817923 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817929 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817936 | orchestrator | 2025-09-27 21:41:01.817942 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.817970 | orchestrator | Saturday 27 September 2025 21:35:09 +0000 (0:00:00.264) 0:04:55.615 **** 2025-09-27 21:41:01.817978 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.817984 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.817990 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.817996 | orchestrator | 2025-09-27 21:41:01.818002 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.818122 | orchestrator | Saturday 27 September 2025 21:35:09 +0000 (0:00:00.422) 0:04:56.037 **** 2025-09-27 21:41:01.818134 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.818140 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.818146 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.818153 | orchestrator | 2025-09-27 21:41:01.818159 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.818165 | orchestrator | Saturday 27 September 2025 21:35:10 +0000 (0:00:00.290) 0:04:56.327 **** 2025-09-27 21:41:01.818171 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.818178 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.818184 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.818190 | orchestrator | 2025-09-27 21:41:01.818196 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 21:41:01.818203 | orchestrator | Saturday 27 September 2025 21:35:10 +0000 (0:00:00.265) 0:04:56.593 **** 2025-09-27 21:41:01.818209 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.818215 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.818221 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.818227 | orchestrator | 2025-09-27 21:41:01.818234 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.818240 | orchestrator | Saturday 27 September 2025 21:35:10 +0000 (0:00:00.251) 0:04:56.844 **** 2025-09-27 21:41:01.818246 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.818252 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.818258 | orchestrator | ok: 
[testbed-node-2] 2025-09-27 21:41:01.818265 | orchestrator | 2025-09-27 21:41:01.818271 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.818277 | orchestrator | Saturday 27 September 2025 21:35:11 +0000 (0:00:00.453) 0:04:57.298 **** 2025-09-27 21:41:01.818283 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.818290 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.818296 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.818302 | orchestrator | 2025-09-27 21:41:01.818308 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.818314 | orchestrator | Saturday 27 September 2025 21:35:11 +0000 (0:00:00.286) 0:04:57.584 **** 2025-09-27 21:41:01.818321 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.818337 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.818344 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.818351 | orchestrator | 2025-09-27 21:41:01.818357 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-27 21:41:01.818363 | orchestrator | Saturday 27 September 2025 21:35:11 +0000 (0:00:00.470) 0:04:58.054 **** 2025-09-27 21:41:01.818372 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:01.818383 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:41:01.818400 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:41:01.818456 | orchestrator | 2025-09-27 21:41:01.818469 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-27 21:41:01.818480 | orchestrator | Saturday 27 September 2025 21:35:12 +0000 (0:00:00.839) 0:04:58.894 **** 2025-09-27 21:41:01.818495 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.818507 | orchestrator | 2025-09-27 21:41:01.818517 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-27 21:41:01.818526 | orchestrator | Saturday 27 September 2025 21:35:13 +0000 (0:00:00.764) 0:04:59.658 **** 2025-09-27 21:41:01.818532 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.818539 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.818545 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.818551 | orchestrator | 2025-09-27 21:41:01.818557 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-27 21:41:01.818563 | orchestrator | Saturday 27 September 2025 21:35:14 +0000 (0:00:00.720) 0:05:00.379 **** 2025-09-27 21:41:01.818569 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.818576 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.818582 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.818588 | orchestrator | 2025-09-27 21:41:01.818594 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-27 21:41:01.818600 | orchestrator | Saturday 27 September 2025 21:35:14 +0000 (0:00:00.309) 0:05:00.688 **** 2025-09-27 21:41:01.818607 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:41:01.818613 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:41:01.818619 | orchestrator | changed: [testbed-node-0] 
=> (item=None) 2025-09-27 21:41:01.818625 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-27 21:41:01.818632 | orchestrator | 2025-09-27 21:41:01.818638 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-27 21:41:01.818682 | orchestrator | Saturday 27 September 2025 21:35:25 +0000 (0:00:11.333) 0:05:12.022 **** 2025-09-27 21:41:01.818689 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.818695 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.818701 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.818707 | orchestrator | 2025-09-27 21:41:01.818714 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-27 21:41:01.818720 | orchestrator | Saturday 27 September 2025 21:35:26 +0000 (0:00:00.618) 0:05:12.640 **** 2025-09-27 21:41:01.818726 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-27 21:41:01.818732 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-27 21:41:01.818739 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-27 21:41:01.818745 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-27 21:41:01.818751 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.818757 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.818764 | orchestrator | 2025-09-27 21:41:01.818770 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-27 21:41:01.818776 | orchestrator | Saturday 27 September 2025 21:35:28 +0000 (0:00:02.415) 0:05:15.055 **** 2025-09-27 21:41:01.818810 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-27 21:41:01.818818 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-27 21:41:01.818824 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-27 21:41:01.818830 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:41:01.818837 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-27 21:41:01.818843 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-27 21:41:01.818849 | orchestrator | 2025-09-27 21:41:01.818875 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-27 21:41:01.818889 | orchestrator | Saturday 27 September 2025 21:35:30 +0000 (0:00:01.267) 0:05:16.322 **** 2025-09-27 21:41:01.818896 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.818902 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.818908 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.818914 | orchestrator | 2025-09-27 21:41:01.818921 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-27 21:41:01.818927 | orchestrator | Saturday 27 September 2025 21:35:30 +0000 (0:00:00.782) 0:05:17.105 **** 2025-09-27 21:41:01.818933 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.818939 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.818945 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.818952 | orchestrator | 2025-09-27 21:41:01.818979 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-27 21:41:01.818987 | orchestrator | Saturday 27 September 2025 21:35:31 +0000 (0:00:00.556) 0:05:17.662 **** 2025-09-27 21:41:01.818993 | 
orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.818999 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.819006 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.819012 | orchestrator | 2025-09-27 21:41:01.819018 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-27 21:41:01.819025 | orchestrator | Saturday 27 September 2025 21:35:31 +0000 (0:00:00.303) 0:05:17.966 **** 2025-09-27 21:41:01.819031 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.819038 | orchestrator | 2025-09-27 21:41:01.819044 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-27 21:41:01.819087 | orchestrator | Saturday 27 September 2025 21:35:32 +0000 (0:00:00.506) 0:05:18.473 **** 2025-09-27 21:41:01.819094 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.819101 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.819107 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.819113 | orchestrator | 2025-09-27 21:41:01.819119 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-27 21:41:01.819126 | orchestrator | Saturday 27 September 2025 21:35:32 +0000 (0:00:00.637) 0:05:19.111 **** 2025-09-27 21:41:01.819132 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.819138 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.819144 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.819151 | orchestrator | 2025-09-27 21:41:01.819157 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-27 21:41:01.819185 | orchestrator | Saturday 27 September 2025 21:35:33 +0000 (0:00:00.316) 0:05:19.427 **** 2025-09-27 21:41:01.819193 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.819200 | orchestrator | 2025-09-27 21:41:01.819206 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-27 21:41:01.819212 | orchestrator | Saturday 27 September 2025 21:35:33 +0000 (0:00:00.505) 0:05:19.932 **** 2025-09-27 21:41:01.819218 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.819224 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.819231 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.819237 | orchestrator | 2025-09-27 21:41:01.819243 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-27 21:41:01.819249 | orchestrator | Saturday 27 September 2025 21:35:35 +0000 (0:00:01.375) 0:05:21.308 **** 2025-09-27 21:41:01.819266 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.819273 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.819279 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.819285 | orchestrator | 2025-09-27 21:41:01.819292 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-27 21:41:01.819298 | orchestrator | Saturday 27 September 2025 21:35:36 +0000 (0:00:01.130) 0:05:22.439 **** 2025-09-27 21:41:01.819304 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.819316 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.819322 | orchestrator | changed: [testbed-node-1] 2025-09-27 
21:41:01.819329 | orchestrator | 2025-09-27 21:41:01.819335 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-27 21:41:01.819341 | orchestrator | Saturday 27 September 2025 21:35:38 +0000 (0:00:01.882) 0:05:24.321 **** 2025-09-27 21:41:01.819367 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.819373 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.819380 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.819386 | orchestrator | 2025-09-27 21:41:01.819392 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-27 21:41:01.819398 | orchestrator | Saturday 27 September 2025 21:35:39 +0000 (0:00:01.857) 0:05:26.179 **** 2025-09-27 21:41:01.819404 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.819411 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.819417 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-27 21:41:01.819423 | orchestrator | 2025-09-27 21:41:01.819429 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-27 21:41:01.819435 | orchestrator | Saturday 27 September 2025 21:35:40 +0000 (0:00:00.640) 0:05:26.820 **** 2025-09-27 21:41:01.819442 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-27 21:41:01.819448 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-27 21:41:01.819489 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-27 21:41:01.819502 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-27 21:41:01.819513 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-27 21:41:01.819523 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
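
Annotation: the six FAILED - RETRYING lines above are expected on a fresh deployment; the role keeps polling the cluster until every manager in the group has registered, which accounts for the ~37 s gap before the next task. A rough manual equivalent of that check, assuming the ceph-ansible container naming scheme and this testbed's three-node mgr group (container name and count are inferred from the log, not taken from the role source):

  # run on testbed-node-0; keep polling until active + standby mgr count == 3
  docker exec ceph-mon-testbed-node-0 ceph mgr dump -f json \
    | jq '([.active_name] + [.standbys[].name]) | length'
  # quick eyeball alternative
  docker exec ceph-mon-testbed-node-0 ceph -s | grep mgr
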
2025-09-27 21:41:01.819535 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.819543 | orchestrator | 2025-09-27 21:41:01.819549 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-27 21:41:01.819556 | orchestrator | Saturday 27 September 2025 21:36:17 +0000 (0:00:36.682) 0:06:03.502 **** 2025-09-27 21:41:01.819562 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.819568 | orchestrator | 2025-09-27 21:41:01.819574 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-27 21:41:01.819580 | orchestrator | Saturday 27 September 2025 21:36:18 +0000 (0:00:01.396) 0:06:04.899 **** 2025-09-27 21:41:01.819587 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.819614 | orchestrator | 2025-09-27 21:41:01.819621 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-09-27 21:41:01.819626 | orchestrator | Saturday 27 September 2025 21:36:19 +0000 (0:00:00.314) 0:06:05.213 **** 2025-09-27 21:41:01.819632 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.819637 | orchestrator | 2025-09-27 21:41:01.819643 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-27 21:41:01.819648 | orchestrator | Saturday 27 September 2025 21:36:19 +0000 (0:00:00.156) 0:06:05.369 **** 2025-09-27 21:41:01.819654 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-27 21:41:01.819659 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-27 21:41:01.819665 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-27 21:41:01.819670 | orchestrator | 2025-09-27 21:41:01.819675 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-27 21:41:01.819681 | orchestrator | Saturday 27 September 2025 21:36:26 +0000 (0:00:07.011) 0:06:12.381 **** 2025-09-27 21:41:01.819692 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-27 21:41:01.819697 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-27 21:41:01.819703 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-27 21:41:01.819708 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-27 21:41:01.819714 | orchestrator | 2025-09-27 21:41:01.819719 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 21:41:01.819740 | orchestrator | Saturday 27 September 2025 21:36:31 +0000 (0:00:05.144) 0:06:17.526 **** 2025-09-27 21:41:01.819745 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.819751 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.819756 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.819763 | orchestrator | 2025-09-27 21:41:01.819773 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-27 21:41:01.819782 | orchestrator | Saturday 27 September 2025 21:36:32 +0000 (0:00:00.696) 0:06:18.223 **** 2025-09-27 21:41:01.819791 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:01.819828 | orchestrator | 2025-09-27 
21:41:01.819836 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-27 21:41:01.819841 | orchestrator | Saturday 27 September 2025 21:36:32 +0000 (0:00:00.520) 0:06:18.743 **** 2025-09-27 21:41:01.819847 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.819852 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.819858 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.819863 | orchestrator | 2025-09-27 21:41:01.819869 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-27 21:41:01.819874 | orchestrator | Saturday 27 September 2025 21:36:32 +0000 (0:00:00.300) 0:06:19.044 **** 2025-09-27 21:41:01.819879 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.819885 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.819890 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.819896 | orchestrator | 2025-09-27 21:41:01.819901 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-27 21:41:01.819907 | orchestrator | Saturday 27 September 2025 21:36:34 +0000 (0:00:01.482) 0:06:20.526 **** 2025-09-27 21:41:01.819912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-27 21:41:01.819917 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-27 21:41:01.819923 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-27 21:41:01.819928 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.819934 | orchestrator | 2025-09-27 21:41:01.819939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-27 21:41:01.819945 | orchestrator | Saturday 27 September 2025 21:36:34 +0000 (0:00:00.586) 0:06:21.113 **** 2025-09-27 21:41:01.819950 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.819955 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.819961 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.819967 | orchestrator | 2025-09-27 21:41:01.819972 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-27 21:41:01.819977 | orchestrator | 2025-09-27 21:41:01.819983 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.819988 | orchestrator | Saturday 27 September 2025 21:36:35 +0000 (0:00:00.626) 0:06:21.740 **** 2025-09-27 21:41:01.819994 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.819999 | orchestrator | 2025-09-27 21:41:01.820028 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.820034 | orchestrator | Saturday 27 September 2025 21:36:36 +0000 (0:00:00.710) 0:06:22.450 **** 2025-09-27 21:41:01.820040 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.820064 | orchestrator | 2025-09-27 21:41:01.820096 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.820102 | orchestrator | Saturday 27 September 2025 21:36:36 +0000 (0:00:00.552) 0:06:23.003 **** 2025-09-27 21:41:01.820107 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820113 | orchestrator | skipping: [testbed-node-4] 
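
Annotation: the module shuffling above (disable iostat, nfs and restful; enable dashboard and prometheus; balancer and status skipped because they are already active) reduces to plain `ceph mgr module` calls. A hedged sketch of the equivalent CLI, run from a mon node (the role itself wraps these calls in the mon container):

  # drop modules that should not stay enabled by default
  ceph mgr module disable iostat
  ceph mgr module disable nfs
  ceph mgr module disable restful
  # enable the modules requested via ceph_mgr_modules
  ceph mgr module enable dashboard
  ceph mgr module enable prometheus
  # confirm the result
  ceph mgr module ls -f json | jq '.enabled_modules'
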
2025-09-27 21:41:01.820118 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820124 | orchestrator | 2025-09-27 21:41:01.820129 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.820135 | orchestrator | Saturday 27 September 2025 21:36:37 +0000 (0:00:00.504) 0:06:23.507 **** 2025-09-27 21:41:01.820140 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820145 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820151 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820156 | orchestrator | 2025-09-27 21:41:01.820162 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.820167 | orchestrator | Saturday 27 September 2025 21:36:38 +0000 (0:00:00.720) 0:06:24.228 **** 2025-09-27 21:41:01.820172 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820178 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820183 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820189 | orchestrator | 2025-09-27 21:41:01.820194 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.820200 | orchestrator | Saturday 27 September 2025 21:36:38 +0000 (0:00:00.725) 0:06:24.953 **** 2025-09-27 21:41:01.820205 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820210 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820216 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820221 | orchestrator | 2025-09-27 21:41:01.820226 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.820232 | orchestrator | Saturday 27 September 2025 21:36:39 +0000 (0:00:00.673) 0:06:25.626 **** 2025-09-27 21:41:01.820237 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820243 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820248 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820253 | orchestrator | 2025-09-27 21:41:01.820259 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 21:41:01.820264 | orchestrator | Saturday 27 September 2025 21:36:39 +0000 (0:00:00.502) 0:06:26.129 **** 2025-09-27 21:41:01.820270 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820275 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820280 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820286 | orchestrator | 2025-09-27 21:41:01.820291 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.820297 | orchestrator | Saturday 27 September 2025 21:36:40 +0000 (0:00:00.314) 0:06:26.443 **** 2025-09-27 21:41:01.820306 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820311 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820317 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820322 | orchestrator | 2025-09-27 21:41:01.820327 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.820333 | orchestrator | Saturday 27 September 2025 21:36:40 +0000 (0:00:00.316) 0:06:26.760 **** 2025-09-27 21:41:01.820338 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820344 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820349 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820354 | orchestrator | 2025-09-27 
21:41:01.820360 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 21:41:01.820365 | orchestrator | Saturday 27 September 2025 21:36:41 +0000 (0:00:00.759) 0:06:27.519 **** 2025-09-27 21:41:01.820371 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820376 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820382 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820387 | orchestrator | 2025-09-27 21:41:01.820393 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.820402 | orchestrator | Saturday 27 September 2025 21:36:42 +0000 (0:00:00.710) 0:06:28.230 **** 2025-09-27 21:41:01.820407 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820413 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820431 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820437 | orchestrator | 2025-09-27 21:41:01.820442 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.820448 | orchestrator | Saturday 27 September 2025 21:36:42 +0000 (0:00:00.527) 0:06:28.757 **** 2025-09-27 21:41:01.820453 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820459 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820464 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820469 | orchestrator | 2025-09-27 21:41:01.820475 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.820480 | orchestrator | Saturday 27 September 2025 21:36:42 +0000 (0:00:00.297) 0:06:29.055 **** 2025-09-27 21:41:01.820485 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820491 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820496 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820502 | orchestrator | 2025-09-27 21:41:01.820507 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.820513 | orchestrator | Saturday 27 September 2025 21:36:43 +0000 (0:00:00.319) 0:06:29.374 **** 2025-09-27 21:41:01.820518 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820523 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820529 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820534 | orchestrator | 2025-09-27 21:41:01.820540 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.820545 | orchestrator | Saturday 27 September 2025 21:36:43 +0000 (0:00:00.306) 0:06:29.681 **** 2025-09-27 21:41:01.820550 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820556 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820561 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820567 | orchestrator | 2025-09-27 21:41:01.820575 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.820581 | orchestrator | Saturday 27 September 2025 21:36:44 +0000 (0:00:00.553) 0:06:30.235 **** 2025-09-27 21:41:01.820587 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820592 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820597 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820603 | orchestrator | 2025-09-27 21:41:01.820608 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 
21:41:01.820613 | orchestrator | Saturday 27 September 2025 21:36:44 +0000 (0:00:00.310) 0:06:30.546 **** 2025-09-27 21:41:01.820619 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820624 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820630 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820635 | orchestrator | 2025-09-27 21:41:01.820640 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.820646 | orchestrator | Saturday 27 September 2025 21:36:44 +0000 (0:00:00.313) 0:06:30.859 **** 2025-09-27 21:41:01.820651 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820656 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820662 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820667 | orchestrator | 2025-09-27 21:41:01.820672 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.820678 | orchestrator | Saturday 27 September 2025 21:36:44 +0000 (0:00:00.286) 0:06:31.145 **** 2025-09-27 21:41:01.820683 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820688 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820694 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820699 | orchestrator | 2025-09-27 21:41:01.820704 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.820710 | orchestrator | Saturday 27 September 2025 21:36:45 +0000 (0:00:00.564) 0:06:31.710 **** 2025-09-27 21:41:01.820719 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820724 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820730 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820735 | orchestrator | 2025-09-27 21:41:01.820740 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-27 21:41:01.820746 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:00.543) 0:06:32.253 **** 2025-09-27 21:41:01.820751 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820757 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820763 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820773 | orchestrator | 2025-09-27 21:41:01.820783 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-27 21:41:01.820792 | orchestrator | Saturday 27 September 2025 21:36:46 +0000 (0:00:00.293) 0:06:32.547 **** 2025-09-27 21:41:01.820801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 21:41:01.820810 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:41:01.820819 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:41:01.820830 | orchestrator | 2025-09-27 21:41:01.820840 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-27 21:41:01.820853 | orchestrator | Saturday 27 September 2025 21:36:47 +0000 (0:00:01.065) 0:06:33.612 **** 2025-09-27 21:41:01.820859 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.820865 | orchestrator | 2025-09-27 21:41:01.820870 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-27 21:41:01.820875 | 
orchestrator | Saturday 27 September 2025 21:36:47 +0000 (0:00:00.500) 0:06:34.113 **** 2025-09-27 21:41:01.820881 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820886 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820891 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820897 | orchestrator | 2025-09-27 21:41:01.820902 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-27 21:41:01.820908 | orchestrator | Saturday 27 September 2025 21:36:48 +0000 (0:00:00.276) 0:06:34.390 **** 2025-09-27 21:41:01.820913 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.820918 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.820923 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.820929 | orchestrator | 2025-09-27 21:41:01.820934 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-27 21:41:01.820940 | orchestrator | Saturday 27 September 2025 21:36:48 +0000 (0:00:00.502) 0:06:34.892 **** 2025-09-27 21:41:01.820945 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820950 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820956 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820961 | orchestrator | 2025-09-27 21:41:01.820966 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-09-27 21:41:01.820972 | orchestrator | Saturday 27 September 2025 21:36:49 +0000 (0:00:00.641) 0:06:35.534 **** 2025-09-27 21:41:01.820977 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.820982 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.820988 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.820993 | orchestrator | 2025-09-27 21:41:01.820999 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-09-27 21:41:01.821004 | orchestrator | Saturday 27 September 2025 21:36:49 +0000 (0:00:00.371) 0:06:35.905 **** 2025-09-27 21:41:01.821010 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-27 21:41:01.821015 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-27 21:41:01.821021 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-27 21:41:01.821030 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-27 21:41:01.821035 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-27 21:41:01.821041 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-27 21:41:01.821079 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-27 21:41:01.821086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-27 21:41:01.821091 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-27 21:41:01.821097 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-27 21:41:01.821102 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-27 21:41:01.821108 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.file-max', 'value': 26234859}) 2025-09-27 21:41:01.821113 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-27 21:41:01.821118 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-27 21:41:01.821124 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-27 21:41:01.821129 | orchestrator | 2025-09-27 21:41:01.821135 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-09-27 21:41:01.821140 | orchestrator | Saturday 27 September 2025 21:36:54 +0000 (0:00:05.153) 0:06:41.058 **** 2025-09-27 21:41:01.821145 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821151 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821156 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.821162 | orchestrator | 2025-09-27 21:41:01.821167 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-09-27 21:41:01.821173 | orchestrator | Saturday 27 September 2025 21:36:55 +0000 (0:00:00.516) 0:06:41.575 **** 2025-09-27 21:41:01.821178 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.821183 | orchestrator | 2025-09-27 21:41:01.821189 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-09-27 21:41:01.821194 | orchestrator | Saturday 27 September 2025 21:36:55 +0000 (0:00:00.509) 0:06:42.085 **** 2025-09-27 21:41:01.821200 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-27 21:41:01.821205 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-27 21:41:01.821210 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-09-27 21:41:01.821216 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-09-27 21:41:01.821221 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-09-27 21:41:01.821227 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-09-27 21:41:01.821232 | orchestrator | 2025-09-27 21:41:01.821237 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-09-27 21:41:01.821243 | orchestrator | Saturday 27 September 2025 21:36:56 +0000 (0:00:01.036) 0:06:43.121 **** 2025-09-27 21:41:01.821251 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.821257 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 21:41:01.821262 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:41:01.821268 | orchestrator | 2025-09-27 21:41:01.821273 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-09-27 21:41:01.821278 | orchestrator | Saturday 27 September 2025 21:36:59 +0000 (0:00:02.249) 0:06:45.371 **** 2025-09-27 21:41:01.821284 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:41:01.821289 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 21:41:01.821295 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.821305 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 21:41:01.821310 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-27 21:41:01.821315 | orchestrator | 
changed: [testbed-node-4] 2025-09-27 21:41:01.821321 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:41:01.821326 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-27 21:41:01.821332 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.821337 | orchestrator | 2025-09-27 21:41:01.821343 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-09-27 21:41:01.821348 | orchestrator | Saturday 27 September 2025 21:37:00 +0000 (0:00:01.540) 0:06:46.911 **** 2025-09-27 21:41:01.821354 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.821358 | orchestrator | 2025-09-27 21:41:01.821363 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-09-27 21:41:01.821368 | orchestrator | Saturday 27 September 2025 21:37:02 +0000 (0:00:02.255) 0:06:49.167 **** 2025-09-27 21:41:01.821373 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.821378 | orchestrator | 2025-09-27 21:41:01.821382 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-09-27 21:41:01.821387 | orchestrator | Saturday 27 September 2025 21:37:03 +0000 (0:00:00.517) 0:06:49.684 **** 2025-09-27 21:41:01.821392 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c2ef8475-4f12-50de-ab79-c841a7bfbe3d', 'data_vg': 'ceph-c2ef8475-4f12-50de-ab79-c841a7bfbe3d'}) 2025-09-27 21:41:01.821398 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5f61d8e2-65b7-57ca-8dcb-2a964e525246', 'data_vg': 'ceph-5f61d8e2-65b7-57ca-8dcb-2a964e525246'}) 2025-09-27 21:41:01.821403 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-de74169a-f069-5642-ad17-f2f17c514bb2', 'data_vg': 'ceph-de74169a-f069-5642-ad17-f2f17c514bb2'}) 2025-09-27 21:41:01.821410 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2897d5b9-8afd-5dc0-8795-bd1d3af2960f', 'data_vg': 'ceph-2897d5b9-8afd-5dc0-8795-bd1d3af2960f'}) 2025-09-27 21:41:01.821415 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9', 'data_vg': 'ceph-e5968580-5dd1-5a87-a5e5-bc9ba69f72d9'}) 2025-09-27 21:41:01.821420 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-364a105c-f104-5917-80d0-e8f8560ea5f8', 'data_vg': 'ceph-364a105c-f104-5917-80d0-e8f8560ea5f8'}) 2025-09-27 21:41:01.821425 | orchestrator | 2025-09-27 21:41:01.821430 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-09-27 21:41:01.821435 | orchestrator | Saturday 27 September 2025 21:37:46 +0000 (0:00:42.774) 0:07:32.459 **** 2025-09-27 21:41:01.821439 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821444 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821449 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.821454 | orchestrator | 2025-09-27 21:41:01.821459 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-09-27 21:41:01.821463 | orchestrator | Saturday 27 September 2025 21:37:46 +0000 (0:00:00.524) 0:07:32.983 **** 2025-09-27 21:41:01.821468 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.821473 | orchestrator | 2025-09-27 
21:41:01.821478 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-09-27 21:41:01.821483 | orchestrator | Saturday 27 September 2025 21:37:47 +0000 (0:00:00.570) 0:07:33.554 **** 2025-09-27 21:41:01.821488 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.821492 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.821497 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.821502 | orchestrator | 2025-09-27 21:41:01.821507 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-09-27 21:41:01.821512 | orchestrator | Saturday 27 September 2025 21:37:48 +0000 (0:00:00.714) 0:07:34.269 **** 2025-09-27 21:41:01.821520 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.821525 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.821529 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.821534 | orchestrator | 2025-09-27 21:41:01.821539 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-09-27 21:41:01.821544 | orchestrator | Saturday 27 September 2025 21:37:50 +0000 (0:00:02.891) 0:07:37.160 **** 2025-09-27 21:41:01.821548 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.821553 | orchestrator | 2025-09-27 21:41:01.821558 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-09-27 21:41:01.821563 | orchestrator | Saturday 27 September 2025 21:37:51 +0000 (0:00:00.534) 0:07:37.694 **** 2025-09-27 21:41:01.821568 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.821572 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.821577 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.821582 | orchestrator | 2025-09-27 21:41:01.821589 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-09-27 21:41:01.821594 | orchestrator | Saturday 27 September 2025 21:37:52 +0000 (0:00:01.215) 0:07:38.910 **** 2025-09-27 21:41:01.821598 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.821603 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.821608 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.821613 | orchestrator | 2025-09-27 21:41:01.821617 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-09-27 21:41:01.821622 | orchestrator | Saturday 27 September 2025 21:37:54 +0000 (0:00:01.429) 0:07:40.339 **** 2025-09-27 21:41:01.821627 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.821632 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.821637 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.821641 | orchestrator | 2025-09-27 21:41:01.821646 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-09-27 21:41:01.821651 | orchestrator | Saturday 27 September 2025 21:37:55 +0000 (0:00:01.675) 0:07:42.014 **** 2025-09-27 21:41:01.821656 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821660 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821665 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.821670 | orchestrator | 2025-09-27 21:41:01.821675 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-09-27 21:41:01.821680 | orchestrator | Saturday 27 
September 2025 21:37:56 +0000 (0:00:00.342) 0:07:42.357 **** 2025-09-27 21:41:01.821684 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821689 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821694 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.821699 | orchestrator | 2025-09-27 21:41:01.821703 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-09-27 21:41:01.821708 | orchestrator | Saturday 27 September 2025 21:37:56 +0000 (0:00:00.317) 0:07:42.675 **** 2025-09-27 21:41:01.821713 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-27 21:41:01.821718 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-09-27 21:41:01.821722 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-09-27 21:41:01.821727 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-09-27 21:41:01.821732 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-09-27 21:41:01.821737 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-09-27 21:41:01.821741 | orchestrator | 2025-09-27 21:41:01.821746 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-09-27 21:41:01.821751 | orchestrator | Saturday 27 September 2025 21:37:57 +0000 (0:00:01.361) 0:07:44.036 **** 2025-09-27 21:41:01.821756 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-27 21:41:01.821761 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-27 21:41:01.821766 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-27 21:41:01.821770 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-27 21:41:01.821778 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-27 21:41:01.821783 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-27 21:41:01.821787 | orchestrator | 2025-09-27 21:41:01.821795 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-09-27 21:41:01.821800 | orchestrator | Saturday 27 September 2025 21:38:00 +0000 (0:00:02.177) 0:07:46.214 **** 2025-09-27 21:41:01.821804 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-09-27 21:41:01.821809 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-09-27 21:41:01.821814 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-09-27 21:41:01.821819 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-09-27 21:41:01.821823 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-09-27 21:41:01.821828 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-09-27 21:41:01.821833 | orchestrator | 2025-09-27 21:41:01.821838 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-09-27 21:41:01.821843 | orchestrator | Saturday 27 September 2025 21:38:03 +0000 (0:00:03.640) 0:07:49.855 **** 2025-09-27 21:41:01.821847 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821852 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821857 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.821862 | orchestrator | 2025-09-27 21:41:01.821867 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-09-27 21:41:01.821871 | orchestrator | Saturday 27 September 2025 21:38:06 +0000 (0:00:02.809) 0:07:52.664 **** 2025-09-27 21:41:01.821876 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821881 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821886 | 
orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-09-27 21:41:01.821891 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.821896 | orchestrator | 2025-09-27 21:41:01.821900 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-27 21:41:01.821905 | orchestrator | Saturday 27 September 2025 21:38:19 +0000 (0:00:12.764) 0:08:05.429 **** 2025-09-27 21:41:01.821910 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821915 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821920 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.821924 | orchestrator | 2025-09-27 21:41:01.821929 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 21:41:01.821934 | orchestrator | Saturday 27 September 2025 21:38:20 +0000 (0:00:00.793) 0:08:06.222 **** 2025-09-27 21:41:01.821939 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.821944 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.821948 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.821953 | orchestrator | 2025-09-27 21:41:01.821958 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-27 21:41:01.821963 | orchestrator | Saturday 27 September 2025 21:38:20 +0000 (0:00:00.545) 0:08:06.767 **** 2025-09-27 21:41:01.821967 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.821972 | orchestrator | 2025-09-27 21:41:01.821977 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-27 21:41:01.821985 | orchestrator | Saturday 27 September 2025 21:38:21 +0000 (0:00:00.514) 0:08:07.281 **** 2025-09-27 21:41:01.821989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.821994 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.821999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.822004 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822009 | orchestrator | 2025-09-27 21:41:01.822029 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-27 21:41:01.822035 | orchestrator | Saturday 27 September 2025 21:38:21 +0000 (0:00:00.399) 0:08:07.681 **** 2025-09-27 21:41:01.822043 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822056 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822061 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822066 | orchestrator | 2025-09-27 21:41:01.822071 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-27 21:41:01.822075 | orchestrator | Saturday 27 September 2025 21:38:22 +0000 (0:00:00.528) 0:08:08.209 **** 2025-09-27 21:41:01.822080 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822085 | orchestrator | 2025-09-27 21:41:01.822090 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-27 21:41:01.822095 | orchestrator | Saturday 27 September 2025 21:38:22 +0000 (0:00:00.224) 0:08:08.433 **** 2025-09-27 21:41:01.822099 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822104 | 
orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822109 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822114 | orchestrator | 2025-09-27 21:41:01.822119 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-27 21:41:01.822123 | orchestrator | Saturday 27 September 2025 21:38:22 +0000 (0:00:00.335) 0:08:08.769 **** 2025-09-27 21:41:01.822128 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822133 | orchestrator | 2025-09-27 21:41:01.822138 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-27 21:41:01.822142 | orchestrator | Saturday 27 September 2025 21:38:22 +0000 (0:00:00.234) 0:08:09.003 **** 2025-09-27 21:41:01.822147 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822152 | orchestrator | 2025-09-27 21:41:01.822157 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-27 21:41:01.822162 | orchestrator | Saturday 27 September 2025 21:38:23 +0000 (0:00:00.219) 0:08:09.223 **** 2025-09-27 21:41:01.822166 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822171 | orchestrator | 2025-09-27 21:41:01.822176 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-27 21:41:01.822181 | orchestrator | Saturday 27 September 2025 21:38:23 +0000 (0:00:00.115) 0:08:09.339 **** 2025-09-27 21:41:01.822186 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822190 | orchestrator | 2025-09-27 21:41:01.822195 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-27 21:41:01.822200 | orchestrator | Saturday 27 September 2025 21:38:23 +0000 (0:00:00.232) 0:08:09.571 **** 2025-09-27 21:41:01.822208 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822213 | orchestrator | 2025-09-27 21:41:01.822218 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-27 21:41:01.822223 | orchestrator | Saturday 27 September 2025 21:38:23 +0000 (0:00:00.219) 0:08:09.791 **** 2025-09-27 21:41:01.822227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.822232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.822237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.822242 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822246 | orchestrator | 2025-09-27 21:41:01.822251 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-27 21:41:01.822256 | orchestrator | Saturday 27 September 2025 21:38:23 +0000 (0:00:00.369) 0:08:10.161 **** 2025-09-27 21:41:01.822261 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822266 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822270 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822275 | orchestrator | 2025-09-27 21:41:01.822280 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-27 21:41:01.822285 | orchestrator | Saturday 27 September 2025 21:38:24 +0000 (0:00:00.564) 0:08:10.725 **** 2025-09-27 21:41:01.822290 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822294 | orchestrator | 2025-09-27 21:41:01.822299 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] 
**************************** 2025-09-27 21:41:01.822307 | orchestrator | Saturday 27 September 2025 21:38:24 +0000 (0:00:00.237) 0:08:10.963 **** 2025-09-27 21:41:01.822312 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822316 | orchestrator | 2025-09-27 21:41:01.822321 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-27 21:41:01.822326 | orchestrator | 2025-09-27 21:41:01.822331 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.822336 | orchestrator | Saturday 27 September 2025 21:38:25 +0000 (0:00:00.668) 0:08:11.631 **** 2025-09-27 21:41:01.822341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.822346 | orchestrator | 2025-09-27 21:41:01.822351 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.822356 | orchestrator | Saturday 27 September 2025 21:38:26 +0000 (0:00:01.207) 0:08:12.839 **** 2025-09-27 21:41:01.822360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.822365 | orchestrator | 2025-09-27 21:41:01.822370 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.822375 | orchestrator | Saturday 27 September 2025 21:38:27 +0000 (0:00:01.269) 0:08:14.108 **** 2025-09-27 21:41:01.822380 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.822389 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822394 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822398 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.822403 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.822408 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822413 | orchestrator | 2025-09-27 21:41:01.822418 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.822422 | orchestrator | Saturday 27 September 2025 21:38:28 +0000 (0:00:01.048) 0:08:15.156 **** 2025-09-27 21:41:01.822427 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822432 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822437 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822442 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822446 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822451 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822456 | orchestrator | 2025-09-27 21:41:01.822461 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.822465 | orchestrator | Saturday 27 September 2025 21:38:29 +0000 (0:00:00.961) 0:08:16.117 **** 2025-09-27 21:41:01.822470 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822475 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822480 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822484 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822489 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822494 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822499 | orchestrator | 2025-09-27 21:41:01.822504 | orchestrator 
| TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.822508 | orchestrator | Saturday 27 September 2025 21:38:31 +0000 (0:00:01.069) 0:08:17.187 **** 2025-09-27 21:41:01.822513 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822518 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822523 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822528 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822533 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822537 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822542 | orchestrator | 2025-09-27 21:41:01.822547 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.822552 | orchestrator | Saturday 27 September 2025 21:38:31 +0000 (0:00:00.910) 0:08:18.097 **** 2025-09-27 21:41:01.822556 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822564 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822569 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.822574 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822578 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.822583 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.822588 | orchestrator | 2025-09-27 21:41:01.822593 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 21:41:01.822598 | orchestrator | Saturday 27 September 2025 21:38:32 +0000 (0:00:00.891) 0:08:18.989 **** 2025-09-27 21:41:01.822602 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822607 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822612 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822617 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822624 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822629 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822634 | orchestrator | 2025-09-27 21:41:01.822638 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.822643 | orchestrator | Saturday 27 September 2025 21:38:33 +0000 (0:00:00.570) 0:08:19.559 **** 2025-09-27 21:41:01.822648 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822653 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822658 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822662 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822667 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822672 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822677 | orchestrator | 2025-09-27 21:41:01.822681 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.822686 | orchestrator | Saturday 27 September 2025 21:38:33 +0000 (0:00:00.621) 0:08:20.181 **** 2025-09-27 21:41:01.822691 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.822696 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.822701 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.822705 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822710 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822715 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822720 | orchestrator | 2025-09-27 21:41:01.822725 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] 
********************** 2025-09-27 21:41:01.822729 | orchestrator | Saturday 27 September 2025 21:38:34 +0000 (0:00:00.949) 0:08:21.131 **** 2025-09-27 21:41:01.822734 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.822739 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.822744 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.822749 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822753 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822758 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822763 | orchestrator | 2025-09-27 21:41:01.822768 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.822773 | orchestrator | Saturday 27 September 2025 21:38:36 +0000 (0:00:01.089) 0:08:22.221 **** 2025-09-27 21:41:01.822777 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822782 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822787 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822792 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822796 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822801 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822806 | orchestrator | 2025-09-27 21:41:01.822811 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.822816 | orchestrator | Saturday 27 September 2025 21:38:36 +0000 (0:00:00.499) 0:08:22.721 **** 2025-09-27 21:41:01.822820 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.822826 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.822834 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.822842 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.822850 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.822862 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.822870 | orchestrator | 2025-09-27 21:41:01.822878 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.822886 | orchestrator | Saturday 27 September 2025 21:38:37 +0000 (0:00:00.516) 0:08:23.237 **** 2025-09-27 21:41:01.822898 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822907 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822915 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822923 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822929 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822933 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822938 | orchestrator | 2025-09-27 21:41:01.822943 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.822948 | orchestrator | Saturday 27 September 2025 21:38:37 +0000 (0:00:00.660) 0:08:23.897 **** 2025-09-27 21:41:01.822953 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.822957 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.822962 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.822967 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.822972 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.822977 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.822981 | orchestrator | 2025-09-27 21:41:01.822986 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.822991 | orchestrator | Saturday 27 
September 2025 21:38:38 +0000 (0:00:00.514) 0:08:24.412 **** 2025-09-27 21:41:01.822996 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.823001 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.823005 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.823010 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823015 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823020 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823024 | orchestrator | 2025-09-27 21:41:01.823029 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.823034 | orchestrator | Saturday 27 September 2025 21:38:38 +0000 (0:00:00.694) 0:08:25.106 **** 2025-09-27 21:41:01.823039 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.823044 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.823063 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.823068 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823073 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823078 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823083 | orchestrator | 2025-09-27 21:41:01.823088 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 21:41:01.823092 | orchestrator | Saturday 27 September 2025 21:38:39 +0000 (0:00:00.502) 0:08:25.609 **** 2025-09-27 21:41:01.823097 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:01.823102 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:01.823107 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:01.823111 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823116 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823121 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823126 | orchestrator | 2025-09-27 21:41:01.823131 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.823136 | orchestrator | Saturday 27 September 2025 21:38:40 +0000 (0:00:00.642) 0:08:26.251 **** 2025-09-27 21:41:01.823140 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823145 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.823150 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.823158 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823163 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823168 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823173 | orchestrator | 2025-09-27 21:41:01.823177 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.823182 | orchestrator | Saturday 27 September 2025 21:38:40 +0000 (0:00:00.531) 0:08:26.783 **** 2025-09-27 21:41:01.823191 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823195 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.823200 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.823205 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823210 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823215 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823220 | orchestrator | 2025-09-27 21:41:01.823224 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.823229 | orchestrator | Saturday 27 September 2025 21:38:41 +0000 (0:00:00.595) 0:08:27.379 **** 
2025-09-27 21:41:01.823234 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823239 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.823244 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.823248 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823253 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823258 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823263 | orchestrator | 2025-09-27 21:41:01.823268 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-27 21:41:01.823273 | orchestrator | Saturday 27 September 2025 21:38:42 +0000 (0:00:00.869) 0:08:28.248 **** 2025-09-27 21:41:01.823277 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.823282 | orchestrator | 2025-09-27 21:41:01.823287 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-27 21:41:01.823292 | orchestrator | Saturday 27 September 2025 21:38:46 +0000 (0:00:03.998) 0:08:32.247 **** 2025-09-27 21:41:01.823297 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823302 | orchestrator | 2025-09-27 21:41:01.823306 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-27 21:41:01.823311 | orchestrator | Saturday 27 September 2025 21:38:47 +0000 (0:00:01.843) 0:08:34.090 **** 2025-09-27 21:41:01.823316 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823321 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.823326 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.823331 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.823335 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.823340 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.823345 | orchestrator | 2025-09-27 21:41:01.823350 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-27 21:41:01.823355 | orchestrator | Saturday 27 September 2025 21:38:49 +0000 (0:00:01.427) 0:08:35.518 **** 2025-09-27 21:41:01.823360 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.823365 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.823369 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.823374 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.823379 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.823384 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.823389 | orchestrator | 2025-09-27 21:41:01.823393 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-27 21:41:01.823401 | orchestrator | Saturday 27 September 2025 21:38:50 +0000 (0:00:00.953) 0:08:36.471 **** 2025-09-27 21:41:01.823406 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.823411 | orchestrator | 2025-09-27 21:41:01.823416 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-27 21:41:01.823421 | orchestrator | Saturday 27 September 2025 21:38:51 +0000 (0:00:01.081) 0:08:37.552 **** 2025-09-27 21:41:01.823426 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.823430 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.823435 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.823440 | orchestrator | changed: [testbed-node-3] 
2025-09-27 21:41:01.823445 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.823449 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.823454 | orchestrator | 2025-09-27 21:41:01.823462 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-27 21:41:01.823467 | orchestrator | Saturday 27 September 2025 21:38:52 +0000 (0:00:01.345) 0:08:38.898 **** 2025-09-27 21:41:01.823472 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.823477 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.823481 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.823486 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.823491 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.823496 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.823501 | orchestrator | 2025-09-27 21:41:01.823505 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-27 21:41:01.823510 | orchestrator | Saturday 27 September 2025 21:38:55 +0000 (0:00:03.136) 0:08:42.035 **** 2025-09-27 21:41:01.823515 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.823520 | orchestrator | 2025-09-27 21:41:01.823525 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-27 21:41:01.823530 | orchestrator | Saturday 27 September 2025 21:38:57 +0000 (0:00:01.156) 0:08:43.192 **** 2025-09-27 21:41:01.823535 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823539 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.823544 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.823549 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823554 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823559 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823563 | orchestrator | 2025-09-27 21:41:01.823568 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-27 21:41:01.823573 | orchestrator | Saturday 27 September 2025 21:38:57 +0000 (0:00:00.606) 0:08:43.799 **** 2025-09-27 21:41:01.823578 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:01.823583 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:01.823588 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:01.823592 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.823597 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.823604 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.823609 | orchestrator | 2025-09-27 21:41:01.823614 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-27 21:41:01.823619 | orchestrator | Saturday 27 September 2025 21:39:00 +0000 (0:00:02.714) 0:08:46.514 **** 2025-09-27 21:41:01.823624 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:01.823629 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:01.823634 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823638 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:01.823643 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823648 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823653 | orchestrator | 2025-09-27 21:41:01.823658 | orchestrator | PLAY [Apply role ceph-mds] 
***************************************************** 2025-09-27 21:41:01.823663 | orchestrator | 2025-09-27 21:41:01.823667 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.823672 | orchestrator | Saturday 27 September 2025 21:39:01 +0000 (0:00:01.316) 0:08:47.830 **** 2025-09-27 21:41:01.823677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.823682 | orchestrator | 2025-09-27 21:41:01.823687 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.823691 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.527) 0:08:48.358 **** 2025-09-27 21:41:01.823696 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.823701 | orchestrator | 2025-09-27 21:41:01.823706 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.823711 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.702) 0:08:49.060 **** 2025-09-27 21:41:01.823719 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823724 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823728 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823733 | orchestrator | 2025-09-27 21:41:01.823738 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.823743 | orchestrator | Saturday 27 September 2025 21:39:03 +0000 (0:00:00.303) 0:08:49.363 **** 2025-09-27 21:41:01.823748 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823753 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823757 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823763 | orchestrator | 2025-09-27 21:41:01.823772 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.823780 | orchestrator | Saturday 27 September 2025 21:39:03 +0000 (0:00:00.654) 0:08:50.017 **** 2025-09-27 21:41:01.823787 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823795 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823804 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823812 | orchestrator | 2025-09-27 21:41:01.823820 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.823826 | orchestrator | Saturday 27 September 2025 21:39:04 +0000 (0:00:00.646) 0:08:50.664 **** 2025-09-27 21:41:01.823831 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823836 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823843 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823848 | orchestrator | 2025-09-27 21:41:01.823853 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.823858 | orchestrator | Saturday 27 September 2025 21:39:05 +0000 (0:00:00.870) 0:08:51.534 **** 2025-09-27 21:41:01.823863 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823868 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823872 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823877 | orchestrator | 2025-09-27 21:41:01.823882 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-09-27 21:41:01.823887 | orchestrator | Saturday 27 September 2025 21:39:05 +0000 (0:00:00.252) 0:08:51.786 **** 2025-09-27 21:41:01.823892 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823896 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823901 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823906 | orchestrator | 2025-09-27 21:41:01.823911 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.823916 | orchestrator | Saturday 27 September 2025 21:39:05 +0000 (0:00:00.237) 0:08:52.024 **** 2025-09-27 21:41:01.823920 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.823925 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.823930 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.823935 | orchestrator | 2025-09-27 21:41:01.823940 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.823944 | orchestrator | Saturday 27 September 2025 21:39:06 +0000 (0:00:00.256) 0:08:52.280 **** 2025-09-27 21:41:01.823949 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823954 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823959 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823964 | orchestrator | 2025-09-27 21:41:01.823969 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 21:41:01.823973 | orchestrator | Saturday 27 September 2025 21:39:07 +0000 (0:00:00.953) 0:08:53.233 **** 2025-09-27 21:41:01.823978 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.823983 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.823988 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.823993 | orchestrator | 2025-09-27 21:41:01.823997 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.824002 | orchestrator | Saturday 27 September 2025 21:39:07 +0000 (0:00:00.688) 0:08:53.921 **** 2025-09-27 21:41:01.824007 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824015 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824020 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824025 | orchestrator | 2025-09-27 21:41:01.824029 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.824034 | orchestrator | Saturday 27 September 2025 21:39:08 +0000 (0:00:00.345) 0:08:54.267 **** 2025-09-27 21:41:01.824039 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824044 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824059 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824065 | orchestrator | 2025-09-27 21:41:01.824073 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.824078 | orchestrator | Saturday 27 September 2025 21:39:08 +0000 (0:00:00.266) 0:08:54.534 **** 2025-09-27 21:41:01.824083 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824088 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824093 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824098 | orchestrator | 2025-09-27 21:41:01.824102 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.824107 | orchestrator | Saturday 27 September 2025 21:39:08 +0000 
(0:00:00.478) 0:08:55.012 **** 2025-09-27 21:41:01.824112 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824117 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824122 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824127 | orchestrator | 2025-09-27 21:41:01.824131 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.824136 | orchestrator | Saturday 27 September 2025 21:39:09 +0000 (0:00:00.290) 0:08:55.303 **** 2025-09-27 21:41:01.824141 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824146 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824150 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824155 | orchestrator | 2025-09-27 21:41:01.824160 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.824165 | orchestrator | Saturday 27 September 2025 21:39:09 +0000 (0:00:00.306) 0:08:55.610 **** 2025-09-27 21:41:01.824170 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824174 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824179 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824184 | orchestrator | 2025-09-27 21:41:01.824189 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 21:41:01.824193 | orchestrator | Saturday 27 September 2025 21:39:09 +0000 (0:00:00.261) 0:08:55.871 **** 2025-09-27 21:41:01.824198 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824203 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824208 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824213 | orchestrator | 2025-09-27 21:41:01.824217 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.824222 | orchestrator | Saturday 27 September 2025 21:39:10 +0000 (0:00:00.460) 0:08:56.332 **** 2025-09-27 21:41:01.824227 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824232 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824236 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824241 | orchestrator | 2025-09-27 21:41:01.824246 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.824251 | orchestrator | Saturday 27 September 2025 21:39:10 +0000 (0:00:00.364) 0:08:56.696 **** 2025-09-27 21:41:01.824256 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824260 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824265 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824270 | orchestrator | 2025-09-27 21:41:01.824275 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.824280 | orchestrator | Saturday 27 September 2025 21:39:10 +0000 (0:00:00.392) 0:08:57.089 **** 2025-09-27 21:41:01.824284 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824289 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824294 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824302 | orchestrator | 2025-09-27 21:41:01.824309 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-27 21:41:01.824314 | orchestrator | Saturday 27 September 2025 21:39:11 +0000 (0:00:01.046) 0:08:58.136 **** 2025-09-27 21:41:01.824319 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824324 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824328 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-27 21:41:01.824333 | orchestrator | 2025-09-27 21:41:01.824338 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-27 21:41:01.824343 | orchestrator | Saturday 27 September 2025 21:39:12 +0000 (0:00:00.465) 0:08:58.602 **** 2025-09-27 21:41:01.824348 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.824352 | orchestrator | 2025-09-27 21:41:01.824357 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-27 21:41:01.824362 | orchestrator | Saturday 27 September 2025 21:39:14 +0000 (0:00:02.092) 0:09:00.695 **** 2025-09-27 21:41:01.824367 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-27 21:41:01.824373 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824378 | orchestrator | 2025-09-27 21:41:01.824383 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-27 21:41:01.824387 | orchestrator | Saturday 27 September 2025 21:39:14 +0000 (0:00:00.174) 0:09:00.869 **** 2025-09-27 21:41:01.824393 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:41:01.824402 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:41:01.824407 | orchestrator | 2025-09-27 21:41:01.824412 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-27 21:41:01.824416 | orchestrator | Saturday 27 September 2025 21:39:22 +0000 (0:00:08.305) 0:09:09.174 **** 2025-09-27 21:41:01.824421 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:41:01.824426 | orchestrator | 2025-09-27 21:41:01.824431 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-27 21:41:01.824438 | orchestrator | Saturday 27 September 2025 21:39:26 +0000 (0:00:03.853) 0:09:13.028 **** 2025-09-27 21:41:01.824443 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.824448 | orchestrator | 2025-09-27 21:41:01.824453 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-27 21:41:01.824458 | orchestrator | Saturday 27 September 2025 21:39:27 +0000 (0:00:00.763) 0:09:13.791 **** 2025-09-27 21:41:01.824462 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-27 21:41:01.824467 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-27 21:41:01.824472 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-27 21:41:01.824477 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-27 21:41:01.824482 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-27 21:41:01.824486 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-27 21:41:01.824491 | orchestrator | 2025-09-27 21:41:01.824496 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-27 21:41:01.824504 | orchestrator | Saturday 27 September 2025 21:39:28 +0000 (0:00:01.057) 0:09:14.849 **** 2025-09-27 21:41:01.824509 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.824513 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 21:41:01.824518 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:41:01.824523 | orchestrator | 2025-09-27 21:41:01.824528 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-27 21:41:01.824533 | orchestrator | Saturday 27 September 2025 21:39:30 +0000 (0:00:02.274) 0:09:17.124 **** 2025-09-27 21:41:01.824538 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:41:01.824543 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 21:41:01.824547 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824552 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 21:41:01.824557 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-27 21:41:01.824562 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824566 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:41:01.824571 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-27 21:41:01.824576 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824581 | orchestrator | 2025-09-27 21:41:01.824585 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-27 21:41:01.824590 | orchestrator | Saturday 27 September 2025 21:39:32 +0000 (0:00:01.224) 0:09:18.349 **** 2025-09-27 21:41:01.824595 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824600 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824605 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824609 | orchestrator | 2025-09-27 21:41:01.824616 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-27 21:41:01.824621 | orchestrator | Saturday 27 September 2025 21:39:35 +0000 (0:00:02.966) 0:09:21.315 **** 2025-09-27 21:41:01.824626 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.824631 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.824636 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.824640 | orchestrator | 2025-09-27 21:41:01.824645 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-27 21:41:01.824650 | orchestrator | Saturday 27 September 2025 21:39:35 +0000 (0:00:00.307) 0:09:21.623 **** 2025-09-27 21:41:01.824655 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.824660 | orchestrator | 2025-09-27 21:41:01.824665 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-27 
21:41:01.824669 | orchestrator | Saturday 27 September 2025 21:39:35 +0000 (0:00:00.534) 0:09:22.158 **** 2025-09-27 21:41:01.824674 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.824679 | orchestrator | 2025-09-27 21:41:01.824684 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-27 21:41:01.824689 | orchestrator | Saturday 27 September 2025 21:39:36 +0000 (0:00:00.747) 0:09:22.905 **** 2025-09-27 21:41:01.824694 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824698 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824703 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824708 | orchestrator | 2025-09-27 21:41:01.824713 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-27 21:41:01.824718 | orchestrator | Saturday 27 September 2025 21:39:37 +0000 (0:00:01.245) 0:09:24.150 **** 2025-09-27 21:41:01.824722 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824727 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824732 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824737 | orchestrator | 2025-09-27 21:41:01.824741 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-27 21:41:01.824746 | orchestrator | Saturday 27 September 2025 21:39:39 +0000 (0:00:01.156) 0:09:25.307 **** 2025-09-27 21:41:01.824754 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824759 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824766 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824774 | orchestrator | 2025-09-27 21:41:01.824782 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-27 21:41:01.824790 | orchestrator | Saturday 27 September 2025 21:39:41 +0000 (0:00:02.158) 0:09:27.465 **** 2025-09-27 21:41:01.824798 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824806 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824815 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824823 | orchestrator | 2025-09-27 21:41:01.824831 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-27 21:41:01.824840 | orchestrator | Saturday 27 September 2025 21:39:43 +0000 (0:00:01.931) 0:09:29.397 **** 2025-09-27 21:41:01.824845 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824849 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824854 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824859 | orchestrator | 2025-09-27 21:41:01.824864 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 21:41:01.824869 | orchestrator | Saturday 27 September 2025 21:39:44 +0000 (0:00:01.304) 0:09:30.702 **** 2025-09-27 21:41:01.824874 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824879 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824884 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824889 | orchestrator | 2025-09-27 21:41:01.824893 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-27 21:41:01.824898 | orchestrator | Saturday 27 September 2025 21:39:45 +0000 (0:00:00.713) 0:09:31.416 **** 2025-09-27 21:41:01.824903 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.824908 | orchestrator | 2025-09-27 21:41:01.824913 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-27 21:41:01.824918 | orchestrator | Saturday 27 September 2025 21:39:45 +0000 (0:00:00.434) 0:09:31.850 **** 2025-09-27 21:41:01.824922 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.824927 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.824932 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.824937 | orchestrator | 2025-09-27 21:41:01.824942 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-27 21:41:01.824946 | orchestrator | Saturday 27 September 2025 21:39:46 +0000 (0:00:00.510) 0:09:32.361 **** 2025-09-27 21:41:01.824951 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.824956 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.824961 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.824966 | orchestrator | 2025-09-27 21:41:01.824971 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-27 21:41:01.824976 | orchestrator | Saturday 27 September 2025 21:39:47 +0000 (0:00:01.194) 0:09:33.555 **** 2025-09-27 21:41:01.824980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.824985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.824990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.824995 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825000 | orchestrator | 2025-09-27 21:41:01.825004 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-27 21:41:01.825009 | orchestrator | Saturday 27 September 2025 21:39:47 +0000 (0:00:00.585) 0:09:34.140 **** 2025-09-27 21:41:01.825014 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825019 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825024 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825029 | orchestrator | 2025-09-27 21:41:01.825033 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-27 21:41:01.825038 | orchestrator | 2025-09-27 21:41:01.825043 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-27 21:41:01.825082 | orchestrator | Saturday 27 September 2025 21:39:48 +0000 (0:00:00.498) 0:09:34.639 **** 2025-09-27 21:41:01.825088 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.825093 | orchestrator | 2025-09-27 21:41:01.825098 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-27 21:41:01.825103 | orchestrator | Saturday 27 September 2025 21:39:49 +0000 (0:00:00.619) 0:09:35.258 **** 2025-09-27 21:41:01.825108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.825112 | orchestrator | 2025-09-27 21:41:01.825117 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-27 21:41:01.825122 | orchestrator | Saturday 27 September 2025 21:39:49 +0000 (0:00:00.446) 
0:09:35.705 **** 2025-09-27 21:41:01.825127 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825132 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825136 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825141 | orchestrator | 2025-09-27 21:41:01.825146 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-27 21:41:01.825151 | orchestrator | Saturday 27 September 2025 21:39:49 +0000 (0:00:00.400) 0:09:36.105 **** 2025-09-27 21:41:01.825156 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825161 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825166 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825170 | orchestrator | 2025-09-27 21:41:01.825175 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-27 21:41:01.825180 | orchestrator | Saturday 27 September 2025 21:39:50 +0000 (0:00:00.661) 0:09:36.767 **** 2025-09-27 21:41:01.825185 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825190 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825195 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825199 | orchestrator | 2025-09-27 21:41:01.825204 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-27 21:41:01.825209 | orchestrator | Saturday 27 September 2025 21:39:51 +0000 (0:00:00.748) 0:09:37.515 **** 2025-09-27 21:41:01.825214 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825219 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825223 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825228 | orchestrator | 2025-09-27 21:41:01.825233 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-27 21:41:01.825238 | orchestrator | Saturday 27 September 2025 21:39:52 +0000 (0:00:00.739) 0:09:38.255 **** 2025-09-27 21:41:01.825243 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825248 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825253 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825257 | orchestrator | 2025-09-27 21:41:01.825262 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-27 21:41:01.825267 | orchestrator | Saturday 27 September 2025 21:39:52 +0000 (0:00:00.577) 0:09:38.833 **** 2025-09-27 21:41:01.825274 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825279 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825284 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825289 | orchestrator | 2025-09-27 21:41:01.825294 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-27 21:41:01.825299 | orchestrator | Saturday 27 September 2025 21:39:52 +0000 (0:00:00.311) 0:09:39.144 **** 2025-09-27 21:41:01.825304 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825309 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825314 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825318 | orchestrator | 2025-09-27 21:41:01.825323 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-27 21:41:01.825328 | orchestrator | Saturday 27 September 2025 21:39:53 +0000 (0:00:00.317) 0:09:39.462 **** 2025-09-27 21:41:01.825336 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825341 | 
orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825346 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825351 | orchestrator | 2025-09-27 21:41:01.825356 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-27 21:41:01.825361 | orchestrator | Saturday 27 September 2025 21:39:54 +0000 (0:00:00.755) 0:09:40.218 **** 2025-09-27 21:41:01.825366 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825370 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825375 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825380 | orchestrator | 2025-09-27 21:41:01.825385 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-27 21:41:01.825390 | orchestrator | Saturday 27 September 2025 21:39:55 +0000 (0:00:01.024) 0:09:41.242 **** 2025-09-27 21:41:01.825395 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825400 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825404 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825409 | orchestrator | 2025-09-27 21:41:01.825414 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-27 21:41:01.825418 | orchestrator | Saturday 27 September 2025 21:39:55 +0000 (0:00:00.342) 0:09:41.584 **** 2025-09-27 21:41:01.825423 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825427 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825432 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825436 | orchestrator | 2025-09-27 21:41:01.825441 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-27 21:41:01.825446 | orchestrator | Saturday 27 September 2025 21:39:55 +0000 (0:00:00.319) 0:09:41.904 **** 2025-09-27 21:41:01.825450 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825455 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825459 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825464 | orchestrator | 2025-09-27 21:41:01.825469 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-27 21:41:01.825473 | orchestrator | Saturday 27 September 2025 21:39:56 +0000 (0:00:00.329) 0:09:42.234 **** 2025-09-27 21:41:01.825478 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825482 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825487 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825491 | orchestrator | 2025-09-27 21:41:01.825496 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-27 21:41:01.825503 | orchestrator | Saturday 27 September 2025 21:39:56 +0000 (0:00:00.562) 0:09:42.796 **** 2025-09-27 21:41:01.825507 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825512 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825517 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825521 | orchestrator | 2025-09-27 21:41:01.825526 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-27 21:41:01.825530 | orchestrator | Saturday 27 September 2025 21:39:56 +0000 (0:00:00.334) 0:09:43.130 **** 2025-09-27 21:41:01.825535 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825540 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825544 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825549 | 
orchestrator | 2025-09-27 21:41:01.825553 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-27 21:41:01.825558 | orchestrator | Saturday 27 September 2025 21:39:57 +0000 (0:00:00.309) 0:09:43.440 **** 2025-09-27 21:41:01.825562 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825567 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825572 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825576 | orchestrator | 2025-09-27 21:41:01.825581 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-27 21:41:01.825585 | orchestrator | Saturday 27 September 2025 21:39:57 +0000 (0:00:00.357) 0:09:43.798 **** 2025-09-27 21:41:01.825590 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825594 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825602 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825606 | orchestrator | 2025-09-27 21:41:01.825611 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-27 21:41:01.825616 | orchestrator | Saturday 27 September 2025 21:39:58 +0000 (0:00:00.581) 0:09:44.379 **** 2025-09-27 21:41:01.825620 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825625 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825629 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825634 | orchestrator | 2025-09-27 21:41:01.825638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-27 21:41:01.825643 | orchestrator | Saturday 27 September 2025 21:39:58 +0000 (0:00:00.331) 0:09:44.711 **** 2025-09-27 21:41:01.825648 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.825652 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.825657 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.825662 | orchestrator | 2025-09-27 21:41:01.825666 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-27 21:41:01.825671 | orchestrator | Saturday 27 September 2025 21:39:59 +0000 (0:00:00.548) 0:09:45.259 **** 2025-09-27 21:41:01.825675 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.825680 | orchestrator | 2025-09-27 21:41:01.825685 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-27 21:41:01.825689 | orchestrator | Saturday 27 September 2025 21:39:59 +0000 (0:00:00.816) 0:09:46.076 **** 2025-09-27 21:41:01.825694 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825701 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 21:41:01.825705 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:41:01.825710 | orchestrator | 2025-09-27 21:41:01.825715 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-27 21:41:01.825719 | orchestrator | Saturday 27 September 2025 21:40:02 +0000 (0:00:02.332) 0:09:48.409 **** 2025-09-27 21:41:01.825724 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:41:01.825728 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-27 21:41:01.825733 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.825739 | orchestrator | changed: [testbed-node-4] => 
(item=None) 2025-09-27 21:41:01.825746 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-27 21:41:01.825754 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.825766 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:41:01.825773 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-27 21:41:01.825781 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.825789 | orchestrator | 2025-09-27 21:41:01.825796 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-27 21:41:01.825803 | orchestrator | Saturday 27 September 2025 21:40:03 +0000 (0:00:01.334) 0:09:49.743 **** 2025-09-27 21:41:01.825810 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.825818 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.825825 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.825833 | orchestrator | 2025-09-27 21:41:01.825840 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-27 21:41:01.825848 | orchestrator | Saturday 27 September 2025 21:40:03 +0000 (0:00:00.323) 0:09:50.066 **** 2025-09-27 21:41:01.825856 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.825864 | orchestrator | 2025-09-27 21:41:01.825872 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-27 21:41:01.825880 | orchestrator | Saturday 27 September 2025 21:40:04 +0000 (0:00:00.800) 0:09:50.866 **** 2025-09-27 21:41:01.825889 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.825902 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.825907 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.825911 | orchestrator | 2025-09-27 21:41:01.825916 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-27 21:41:01.825923 | orchestrator | Saturday 27 September 2025 21:40:05 +0000 (0:00:00.799) 0:09:51.666 **** 2025-09-27 21:41:01.825928 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825933 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-27 21:41:01.825937 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825942 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-27 21:41:01.825946 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825951 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-27 21:41:01.825956 | orchestrator | 2025-09-27 21:41:01.825960 | orchestrator | TASK [ceph-rgw : Get keys from monitors] 
*************************************** 2025-09-27 21:41:01.825965 | orchestrator | Saturday 27 September 2025 21:40:10 +0000 (0:00:04.989) 0:09:56.655 **** 2025-09-27 21:41:01.825969 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825974 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:41:01.825978 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825983 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:41:01.825987 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:41:01.825992 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:41:01.825996 | orchestrator | 2025-09-27 21:41:01.826001 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-27 21:41:01.826006 | orchestrator | Saturday 27 September 2025 21:40:13 +0000 (0:00:02.727) 0:09:59.383 **** 2025-09-27 21:41:01.826010 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:41:01.826029 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.826034 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 21:41:01.826039 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.826043 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:41:01.826055 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.826060 | orchestrator | 2025-09-27 21:41:01.826065 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-27 21:41:01.826069 | orchestrator | Saturday 27 September 2025 21:40:14 +0000 (0:00:01.164) 0:10:00.548 **** 2025-09-27 21:41:01.826074 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-27 21:41:01.826079 | orchestrator | 2025-09-27 21:41:01.826087 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-27 21:41:01.826092 | orchestrator | Saturday 27 September 2025 21:40:14 +0000 (0:00:00.193) 0:10:00.741 **** 2025-09-27 21:41:01.826096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826101 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826123 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826128 | orchestrator | 2025-09-27 21:41:01.826132 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-27 21:41:01.826137 | orchestrator | Saturday 27 September 2025 21:40:15 +0000 (0:00:00.706) 0:10:01.448 **** 2025-09-27 21:41:01.826142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-27 21:41:01.826165 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826169 | orchestrator | 2025-09-27 21:41:01.826174 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-27 21:41:01.826179 | orchestrator | Saturday 27 September 2025 21:40:15 +0000 (0:00:00.732) 0:10:02.180 **** 2025-09-27 21:41:01.826183 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 21:41:01.826190 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 21:41:01.826195 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 21:41:01.826200 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 21:41:01.826205 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-27 21:41:01.826209 | orchestrator | 2025-09-27 21:41:01.826214 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-27 21:41:01.826218 | orchestrator | Saturday 27 September 2025 21:40:47 +0000 (0:00:31.697) 0:10:33.878 **** 2025-09-27 21:41:01.826223 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826228 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.826232 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.826237 | orchestrator | 2025-09-27 21:41:01.826241 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-27 21:41:01.826246 | orchestrator | Saturday 27 September 2025 21:40:48 +0000 (0:00:00.562) 0:10:34.440 **** 2025-09-27 21:41:01.826250 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826255 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.826260 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.826264 | orchestrator | 2025-09-27 21:41:01.826269 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-27 21:41:01.826273 | orchestrator | Saturday 27 September 2025 21:40:48 +0000 (0:00:00.327) 0:10:34.767 **** 2025-09-27 21:41:01.826280 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2025-09-27 21:41:01.826285 | orchestrator | 2025-09-27 21:41:01.826290 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-27 21:41:01.826294 | orchestrator | Saturday 27 September 2025 21:40:49 +0000 (0:00:00.521) 0:10:35.289 **** 2025-09-27 21:41:01.826299 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.826304 | orchestrator | 2025-09-27 21:41:01.826308 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-27 21:41:01.826313 | orchestrator | Saturday 27 September 2025 21:40:49 +0000 (0:00:00.779) 0:10:36.068 **** 2025-09-27 21:41:01.826317 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.826324 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.826329 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.826333 | orchestrator | 2025-09-27 21:41:01.826338 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-27 21:41:01.826343 | orchestrator | Saturday 27 September 2025 21:40:51 +0000 (0:00:01.355) 0:10:37.424 **** 2025-09-27 21:41:01.826347 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.826352 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.826356 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.826361 | orchestrator | 2025-09-27 21:41:01.826366 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-27 21:41:01.826370 | orchestrator | Saturday 27 September 2025 21:40:52 +0000 (0:00:01.142) 0:10:38.566 **** 2025-09-27 21:41:01.826375 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:41:01.826379 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:41:01.826384 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:41:01.826388 | orchestrator | 2025-09-27 21:41:01.826393 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-27 21:41:01.826398 | orchestrator | Saturday 27 September 2025 21:40:54 +0000 (0:00:02.005) 0:10:40.572 **** 2025-09-27 21:41:01.826402 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.826407 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.826411 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-27 21:41:01.826416 | orchestrator | 2025-09-27 21:41:01.826421 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-27 21:41:01.826425 | orchestrator | Saturday 27 September 2025 21:40:56 +0000 (0:00:02.356) 0:10:42.929 **** 2025-09-27 21:41:01.826430 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826434 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.826439 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.826443 | orchestrator | 2025-09-27 21:41:01.826448 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-27 21:41:01.826453 | orchestrator | Saturday 27 September 2025 21:40:57 +0000 (0:00:00.653) 0:10:43.583 **** 2025-09-27 
21:41:01.826457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:41:01.826462 | orchestrator | 2025-09-27 21:41:01.826466 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-27 21:41:01.826471 | orchestrator | Saturday 27 September 2025 21:40:57 +0000 (0:00:00.538) 0:10:44.121 **** 2025-09-27 21:41:01.826476 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.826480 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.826485 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.826489 | orchestrator | 2025-09-27 21:41:01.826494 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-27 21:41:01.826500 | orchestrator | Saturday 27 September 2025 21:40:58 +0000 (0:00:00.327) 0:10:44.449 **** 2025-09-27 21:41:01.826508 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826512 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:41:01.826517 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:41:01.826522 | orchestrator | 2025-09-27 21:41:01.826526 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-27 21:41:01.826531 | orchestrator | Saturday 27 September 2025 21:40:58 +0000 (0:00:00.593) 0:10:45.042 **** 2025-09-27 21:41:01.826535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:41:01.826540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:41:01.826544 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:41:01.826549 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:41:01.826554 | orchestrator | 2025-09-27 21:41:01.826558 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-27 21:41:01.826563 | orchestrator | Saturday 27 September 2025 21:40:59 +0000 (0:00:00.626) 0:10:45.669 **** 2025-09-27 21:41:01.826567 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:41:01.826572 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:41:01.826576 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:41:01.826581 | orchestrator | 2025-09-27 21:41:01.826586 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:41:01.826590 | orchestrator | testbed-node-0 : ok=141  changed=35  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-09-27 21:41:01.826595 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-27 21:41:01.826600 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-27 21:41:01.826604 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-09-27 21:41:01.826609 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-27 21:41:01.826614 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-27 21:41:01.826618 | orchestrator | 2025-09-27 21:41:01.826623 | orchestrator | 2025-09-27 21:41:01.826627 | orchestrator | 2025-09-27 21:41:01.826632 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:41:01.826639 | orchestrator | Saturday 
27 September 2025 21:40:59 +0000 (0:00:00.231) 0:10:45.900 **** 2025-09-27 21:41:01.826644 | orchestrator | =============================================================================== 2025-09-27 21:41:01.826648 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 45.45s 2025-09-27 21:41:01.826653 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.77s 2025-09-27 21:41:01.826657 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.68s 2025-09-27 21:41:01.826662 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.70s 2025-09-27 21:41:01.826666 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.09s 2025-09-27 21:41:01.826671 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.87s 2025-09-27 21:41:01.826675 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.76s 2025-09-27 21:41:01.826680 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.33s 2025-09-27 21:41:01.826685 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.29s 2025-09-27 21:41:01.826689 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.31s 2025-09-27 21:41:01.826697 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.01s 2025-09-27 21:41:01.826701 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.61s 2025-09-27 21:41:01.826706 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 5.15s 2025-09-27 21:41:01.826710 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.14s 2025-09-27 21:41:01.826715 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.99s 2025-09-27 21:41:01.826720 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.00s 2025-09-27 21:41:01.826724 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.85s 2025-09-27 21:41:01.826729 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.85s 2025-09-27 21:41:01.826733 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.76s 2025-09-27 21:41:01.826738 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.64s 2025-09-27 21:41:01.826742 | orchestrator | 2025-09-27 21:41:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:04.845974 | orchestrator | 2025-09-27 21:41:04 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:04.847237 | orchestrator | 2025-09-27 21:41:04 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:04.849032 | orchestrator | 2025-09-27 21:41:04 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:04.849368 | orchestrator | 2025-09-27 21:41:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:07.892839 | orchestrator | 2025-09-27 21:41:07 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:07.893352 | orchestrator | 2025-09-27 21:41:07 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is 
in state STARTED 2025-09-27 21:41:07.895794 | orchestrator | 2025-09-27 21:41:07 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:07.895869 | orchestrator | 2025-09-27 21:41:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:10.935976 | orchestrator | 2025-09-27 21:41:10 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:10.937369 | orchestrator | 2025-09-27 21:41:10 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:10.938857 | orchestrator | 2025-09-27 21:41:10 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:10.939228 | orchestrator | 2025-09-27 21:41:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:13.979457 | orchestrator | 2025-09-27 21:41:13 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:13.980183 | orchestrator | 2025-09-27 21:41:13 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:13.982458 | orchestrator | 2025-09-27 21:41:13 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:13.982482 | orchestrator | 2025-09-27 21:41:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:17.033185 | orchestrator | 2025-09-27 21:41:17 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:17.036859 | orchestrator | 2025-09-27 21:41:17 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:17.038556 | orchestrator | 2025-09-27 21:41:17 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:17.038741 | orchestrator | 2025-09-27 21:41:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:20.079535 | orchestrator | 2025-09-27 21:41:20 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:20.079965 | orchestrator | 2025-09-27 21:41:20 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:20.080443 | orchestrator | 2025-09-27 21:41:20 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:20.080923 | orchestrator | 2025-09-27 21:41:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:23.117774 | orchestrator | 2025-09-27 21:41:23 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:23.119667 | orchestrator | 2025-09-27 21:41:23 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:23.121762 | orchestrator | 2025-09-27 21:41:23 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:23.121841 | orchestrator | 2025-09-27 21:41:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:26.163262 | orchestrator | 2025-09-27 21:41:26 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:26.163835 | orchestrator | 2025-09-27 21:41:26 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:26.164660 | orchestrator | 2025-09-27 21:41:26 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:26.165798 | orchestrator | 2025-09-27 21:41:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:29.213417 | orchestrator | 2025-09-27 21:41:29 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:29.215025 | 
orchestrator | 2025-09-27 21:41:29 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:29.216991 | orchestrator | 2025-09-27 21:41:29 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:29.217062 | orchestrator | 2025-09-27 21:41:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:32.262477 | orchestrator | 2025-09-27 21:41:32 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:32.264348 | orchestrator | 2025-09-27 21:41:32 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:32.266583 | orchestrator | 2025-09-27 21:41:32 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state STARTED 2025-09-27 21:41:32.267352 | orchestrator | 2025-09-27 21:41:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:35.314651 | orchestrator | 2025-09-27 21:41:35 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:35.315673 | orchestrator | 2025-09-27 21:41:35 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:35.318553 | orchestrator | 2025-09-27 21:41:35 | INFO  | Task 5570fb53-c4b5-4221-9137-dbc593d8f089 is in state SUCCESS 2025-09-27 21:41:35.318732 | orchestrator | 2025-09-27 21:41:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:35.319709 | orchestrator | 2025-09-27 21:41:35.319744 | orchestrator | 2025-09-27 21:41:35.319756 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:41:35.319768 | orchestrator | 2025-09-27 21:41:35.319780 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:41:35.319791 | orchestrator | Saturday 27 September 2025 21:38:44 +0000 (0:00:00.209) 0:00:00.210 **** 2025-09-27 21:41:35.319802 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:35.319814 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:35.319825 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:35.319836 | orchestrator | 2025-09-27 21:41:35.319847 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:41:35.319881 | orchestrator | Saturday 27 September 2025 21:38:44 +0000 (0:00:00.221) 0:00:00.431 **** 2025-09-27 21:41:35.319893 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-09-27 21:41:35.319904 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-09-27 21:41:35.319915 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-09-27 21:41:35.319926 | orchestrator | 2025-09-27 21:41:35.319937 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-09-27 21:41:35.319948 | orchestrator | 2025-09-27 21:41:35.319959 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 21:41:35.319970 | orchestrator | Saturday 27 September 2025 21:38:44 +0000 (0:00:00.287) 0:00:00.719 **** 2025-09-27 21:41:35.319981 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:35.319992 | orchestrator | 2025-09-27 21:41:35.320003 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-27 21:41:35.320014 | orchestrator | Saturday 27 September 2025 21:38:44 +0000 (0:00:00.373) 0:00:01.092 
**** 2025-09-27 21:41:35.320046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 21:41:35.320057 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 21:41:35.320068 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-27 21:41:35.320080 | orchestrator | 2025-09-27 21:41:35.320091 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-27 21:41:35.320102 | orchestrator | Saturday 27 September 2025 21:38:45 +0000 (0:00:00.576) 0:00:01.669 **** 2025-09-27 21:41:35.320116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320238 | orchestrator | 2025-09-27 21:41:35.320250 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 21:41:35.320261 | orchestrator | Saturday 27 September 2025 21:38:46 +0000 (0:00:01.414) 0:00:03.084 **** 2025-09-27 21:41:35.320276 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:35.320288 | orchestrator | 2025-09-27 21:41:35.320299 | orchestrator | TASK [service-cert-copy : opensearch | Copying 
over extra CA certificates] ***** 2025-09-27 21:41:35.320316 | orchestrator | Saturday 27 September 2025 21:38:47 +0000 (0:00:00.438) 0:00:03.522 **** 2025-09-27 21:41:35.320337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320434 | orchestrator | 2025-09-27 21:41:35.320446 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-27 21:41:35.320459 | orchestrator | Saturday 27 September 2025 21:38:49 +0000 (0:00:02.300) 0:00:05.822 **** 2025-09-27 21:41:35.320473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:41:35.320520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:41:35.320540 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:35.320555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:41:35.320576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:41:35.320589 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:35.320601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:41:35.320613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:41:35.320625 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:35.320642 | orchestrator | 2025-09-27 21:41:35.320653 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-27 21:41:35.320664 | orchestrator | Saturday 27 September 2025 21:38:50 +0000 (0:00:01.181) 0:00:07.004 **** 2025-09-27 21:41:35.320680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:41:35.320699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:41:35.320711 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:35.320723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:41:35.320735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:41:35.320752 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:35.320769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-27 21:41:35.320788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-27 21:41:35.320800 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:35.320811 | orchestrator | 2025-09-27 21:41:35.320822 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-27 21:41:35.320833 | orchestrator | Saturday 27 September 2025 21:38:51 +0000 (0:00:01.093) 0:00:08.097 **** 2025-09-27 21:41:35.320844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.320902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.320946 | orchestrator | 2025-09-27 21:41:35.320957 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-27 21:41:35.320968 | orchestrator | Saturday 27 September 2025 21:38:54 +0000 (0:00:02.255) 0:00:10.353 **** 2025-09-27 21:41:35.320979 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:35.320990 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:35.321001 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:35.321012 | orchestrator | 2025-09-27 21:41:35.321092 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-27 21:41:35.321107 | orchestrator | Saturday 27 September 2025 21:38:57 +0000 (0:00:02.930) 0:00:13.283 **** 2025-09-27 21:41:35.321117 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:35.321128 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:35.321139 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:35.321150 | orchestrator | 2025-09-27 21:41:35.321161 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-27 21:41:35.321172 | orchestrator | Saturday 27 September 2025 21:38:59 +0000 (0:00:02.365) 0:00:15.649 **** 2025-09-27 21:41:35.321189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.321208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.321221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-27 21:41:35.321233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.321258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.321277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-27 21:41:35.321289 | orchestrator | 2025-09-27 21:41:35.321300 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 21:41:35.321311 | orchestrator | Saturday 27 September 2025 21:39:01 +0000 (0:00:02.388) 0:00:18.037 **** 2025-09-27 21:41:35.321322 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:35.321333 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:35.321344 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:35.321354 | orchestrator | 2025-09-27 21:41:35.321365 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-27 21:41:35.321376 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.423) 0:00:18.461 **** 2025-09-27 21:41:35.321387 | orchestrator | 2025-09-27 21:41:35.321398 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-27 21:41:35.321409 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.058) 0:00:18.519 **** 2025-09-27 21:41:35.321420 | orchestrator | 2025-09-27 21:41:35.321431 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-27 21:41:35.321442 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.064) 0:00:18.584 **** 2025-09-27 21:41:35.321460 | orchestrator | 2025-09-27 21:41:35.321471 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-27 21:41:35.321482 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.067) 0:00:18.651 **** 2025-09-27 21:41:35.321493 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:35.321503 | orchestrator | 2025-09-27 21:41:35.321514 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-27 21:41:35.321525 | orchestrator | Saturday 27 September 2025 21:39:02 +0000 (0:00:00.210) 0:00:18.861 **** 2025-09-27 21:41:35.321536 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:35.321547 | orchestrator | 2025-09-27 21:41:35.321558 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-27 21:41:35.321569 | orchestrator | Saturday 27 September 2025 21:39:03 +0000 (0:00:00.653) 0:00:19.515 **** 2025-09-27 21:41:35.321579 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:35.321589 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:35.321599 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:35.321608 | orchestrator | 2025-09-27 21:41:35.321618 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-27 21:41:35.321628 | orchestrator | Saturday 27 September 2025 21:40:00 +0000 (0:00:57.167) 0:01:16.683 **** 2025-09-27 21:41:35.321637 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:35.321647 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:35.321656 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:35.321666 | orchestrator | 2025-09-27 21:41:35.321676 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-27 21:41:35.321685 | orchestrator | Saturday 27 September 2025 21:41:21 +0000 (0:01:20.540) 0:02:37.223 **** 2025-09-27 21:41:35.321695 | orchestrator | included: 
/ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:35.321705 | orchestrator | 2025-09-27 21:41:35.321714 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-27 21:41:35.321724 | orchestrator | Saturday 27 September 2025 21:41:21 +0000 (0:00:00.544) 0:02:37.768 **** 2025-09-27 21:41:35.321733 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:35.321743 | orchestrator | 2025-09-27 21:41:35.321753 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-27 21:41:35.321762 | orchestrator | Saturday 27 September 2025 21:41:24 +0000 (0:00:02.942) 0:02:40.710 **** 2025-09-27 21:41:35.321772 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:35.321782 | orchestrator | 2025-09-27 21:41:35.321791 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-27 21:41:35.321801 | orchestrator | Saturday 27 September 2025 21:41:27 +0000 (0:00:02.436) 0:02:43.146 **** 2025-09-27 21:41:35.321811 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:35.321820 | orchestrator | 2025-09-27 21:41:35.321830 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-27 21:41:35.321840 | orchestrator | Saturday 27 September 2025 21:41:29 +0000 (0:00:02.881) 0:02:46.028 **** 2025-09-27 21:41:35.321853 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:35.321863 | orchestrator | 2025-09-27 21:41:35.321873 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:41:35.321884 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:41:35.321894 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:41:35.321905 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-27 21:41:35.321915 | orchestrator | 2025-09-27 21:41:35.321924 | orchestrator | 2025-09-27 21:41:35.321934 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:41:35.321955 | orchestrator | Saturday 27 September 2025 21:41:32 +0000 (0:00:02.765) 0:02:48.793 **** 2025-09-27 21:41:35.321965 | orchestrator | =============================================================================== 2025-09-27 21:41:35.321975 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 80.54s 2025-09-27 21:41:35.321985 | orchestrator | opensearch : Restart opensearch container ------------------------------ 57.17s 2025-09-27 21:41:35.321994 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.94s 2025-09-27 21:41:35.322004 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.93s 2025-09-27 21:41:35.322013 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.88s 2025-09-27 21:41:35.322094 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.77s 2025-09-27 21:41:35.322105 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.44s 2025-09-27 21:41:35.322114 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.39s 2025-09-27 21:41:35.322124 
| orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.37s 2025-09-27 21:41:35.322134 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.30s 2025-09-27 21:41:35.322143 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.26s 2025-09-27 21:41:35.322153 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.41s 2025-09-27 21:41:35.322162 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.18s 2025-09-27 21:41:35.322172 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s 2025-09-27 21:41:35.322182 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.65s 2025-09-27 21:41:35.322191 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.58s 2025-09-27 21:41:35.322201 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-09-27 21:41:35.322211 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-09-27 21:41:35.322220 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.42s 2025-09-27 21:41:35.322230 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.37s 2025-09-27 21:41:38.365076 | orchestrator | 2025-09-27 21:41:38 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:38.366468 | orchestrator | 2025-09-27 21:41:38 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:38.366688 | orchestrator | 2025-09-27 21:41:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:41.423312 | orchestrator | 2025-09-27 21:41:41 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:41.424617 | orchestrator | 2025-09-27 21:41:41 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:41.424714 | orchestrator | 2025-09-27 21:41:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:44.466439 | orchestrator | 2025-09-27 21:41:44 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:44.467920 | orchestrator | 2025-09-27 21:41:44 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:44.467974 | orchestrator | 2025-09-27 21:41:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:47.518393 | orchestrator | 2025-09-27 21:41:47 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:47.519894 | orchestrator | 2025-09-27 21:41:47 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:47.519955 | orchestrator | 2025-09-27 21:41:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:50.569933 | orchestrator | 2025-09-27 21:41:50 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state STARTED 2025-09-27 21:41:50.570196 | orchestrator | 2025-09-27 21:41:50 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:50.570235 | orchestrator | 2025-09-27 21:41:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:53.619443 | orchestrator | 2025-09-27 21:41:53 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:41:53.621368 
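The opensearch post-config tasks above check for an existing Index State Management (ISM) log retention policy, create one if it is missing, and attach it to the existing log indices, all through OpenSearch's REST API. The following is a minimal, illustrative Ansible sketch of the same idea using the uri module; the endpoint URL, policy name, index pattern and 14-day retention age are assumptions for illustration, not values taken from this job.

    - name: Manage an OpenSearch ISM log retention policy (illustrative sketch)
      hosts: localhost
      gather_facts: false
      vars:
        opensearch_url: "https://192.168.16.10:9200"   # assumed API endpoint
        policy_id: "log-retention"                     # hypothetical policy name
      tasks:
        - name: Check if a log retention policy exists
          ansible.builtin.uri:
            url: "{{ opensearch_url }}/_plugins/_ism/policies/{{ policy_id }}"
            method: GET
            status_code: [200, 404]
            validate_certs: false
          register: existing_policy

        - name: Create new log retention policy
          ansible.builtin.uri:
            url: "{{ opensearch_url }}/_plugins/_ism/policies/{{ policy_id }}"
            method: PUT
            body_format: json
            body:
              policy:
                description: "Delete log indices after 14 days"
                default_state: hot
                states:
                  - name: hot
                    actions: []
                    transitions:
                      - state_name: delete
                        conditions:
                          min_index_age: "14d"
                  - name: delete
                    actions:
                      - delete: {}
                    transitions: []
            status_code: [200, 201]
            validate_certs: false
          when: existing_policy.status == 404

        - name: Apply retention policy to existing indices
          ansible.builtin.uri:
            url: "{{ opensearch_url }}/_plugins/_ism/add/flog-*"   # assumed index pattern
            method: POST
            body_format: json
            body:
              policy_id: "{{ policy_id }}"
            status_code: 200
            validate_certs: false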
| orchestrator | 2025-09-27 21:41:53 | INFO  | Task a6c6058a-d107-4f6c-99b1-07ab025bbad4 is in state SUCCESS 2025-09-27 21:41:53.623194 | orchestrator | 2025-09-27 21:41:53.623247 | orchestrator | 2025-09-27 21:41:53.623268 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-27 21:41:53.623289 | orchestrator | 2025-09-27 21:41:53.623308 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-27 21:41:53.623327 | orchestrator | Saturday 27 September 2025 21:38:44 +0000 (0:00:00.095) 0:00:00.095 **** 2025-09-27 21:41:53.623346 | orchestrator | ok: [localhost] => { 2025-09-27 21:41:53.623367 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-27 21:41:53.623387 | orchestrator | } 2025-09-27 21:41:53.623407 | orchestrator | 2025-09-27 21:41:53.623426 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-27 21:41:53.623444 | orchestrator | Saturday 27 September 2025 21:38:44 +0000 (0:00:00.041) 0:00:00.136 **** 2025-09-27 21:41:53.623462 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-27 21:41:53.623483 | orchestrator | ...ignoring 2025-09-27 21:41:53.623503 | orchestrator | 2025-09-27 21:41:53.623522 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-27 21:41:53.623540 | orchestrator | Saturday 27 September 2025 21:38:46 +0000 (0:00:02.765) 0:00:02.902 **** 2025-09-27 21:41:53.623559 | orchestrator | skipping: [localhost] 2025-09-27 21:41:53.623578 | orchestrator | 2025-09-27 21:41:53.623596 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-27 21:41:53.623615 | orchestrator | Saturday 27 September 2025 21:38:46 +0000 (0:00:00.044) 0:00:02.947 **** 2025-09-27 21:41:53.623634 | orchestrator | ok: [localhost] 2025-09-27 21:41:53.623652 | orchestrator | 2025-09-27 21:41:53.623671 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:41:53.623690 | orchestrator | 2025-09-27 21:41:53.623709 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:41:53.623727 | orchestrator | Saturday 27 September 2025 21:38:47 +0000 (0:00:00.135) 0:00:03.082 **** 2025-09-27 21:41:53.623745 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.623765 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.623785 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.623805 | orchestrator | 2025-09-27 21:41:53.623826 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:41:53.623846 | orchestrator | Saturday 27 September 2025 21:38:47 +0000 (0:00:00.256) 0:00:03.339 **** 2025-09-27 21:41:53.623866 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-27 21:41:53.623887 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-27 21:41:53.623908 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-27 21:41:53.623928 | orchestrator | 2025-09-27 21:41:53.623950 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-27 21:41:53.623972 | orchestrator | 2025-09-27 
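The "Set kolla_action_mariadb" play above decides between a fresh deploy and an upgrade by probing the database VIP: ansible.builtin.wait_for with a search_regex of "MariaDB" reads the server greeting on port 3306, and the expected, ignored failure seen in the log simply means nothing is listening there yet. A minimal sketch of that pattern follows; the VIP address, timeout and the fallback value are illustrative assumptions.

    - name: Decide the kolla action for MariaDB (illustrative sketch)
      hosts: localhost
      gather_facts: false
      vars:
        database_vip: 192.168.16.9           # assumed internal VIP
      tasks:
        - name: Check MariaDB service
          ansible.builtin.wait_for:
            host: "{{ database_vip }}"
            port: 3306
            search_regex: MariaDB            # the server handshake contains "MariaDB"
            timeout: 2
          register: mariadb_check
          ignore_errors: true

        - name: Set kolla_action_mariadb = upgrade if MariaDB is already running
          ansible.builtin.set_fact:
            kolla_action_mariadb: upgrade
          when: mariadb_check is succeeded

        - name: Set kolla_action_mariadb = deploy on a fresh environment
          ansible.builtin.set_fact:
            kolla_action_mariadb: deploy
          when: mariadb_check is failed

The same wait_for pattern reappears later as the per-node "Check MariaDB service port liveness" task, which is why those three timeouts are also ignored rather than treated as fatal.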
21:41:53.623994 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-27 21:41:53.624046 | orchestrator | Saturday 27 September 2025 21:38:47 +0000 (0:00:00.430) 0:00:03.769 **** 2025-09-27 21:41:53.624069 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:41:53.624124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-27 21:41:53.624144 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-27 21:41:53.624163 | orchestrator | 2025-09-27 21:41:53.624183 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 21:41:53.624202 | orchestrator | Saturday 27 September 2025 21:38:48 +0000 (0:00:00.380) 0:00:04.149 **** 2025-09-27 21:41:53.624222 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:53.624243 | orchestrator | 2025-09-27 21:41:53.624263 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-27 21:41:53.624282 | orchestrator | Saturday 27 September 2025 21:38:48 +0000 (0:00:00.473) 0:00:04.623 **** 2025-09-27 21:41:53.624352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.624379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.624427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.624450 | orchestrator | 2025-09-27 21:41:53.624480 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-27 21:41:53.624499 | orchestrator | Saturday 27 September 2025 21:38:51 +0000 (0:00:02.967) 0:00:07.591 **** 2025-09-27 21:41:53.624519 | orchestrator | skipping: 
[testbed-node-1] 2025-09-27 21:41:53.624540 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.624560 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.624580 | orchestrator | 2025-09-27 21:41:53.624600 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-27 21:41:53.624619 | orchestrator | Saturday 27 September 2025 21:38:52 +0000 (0:00:00.698) 0:00:08.289 **** 2025-09-27 21:41:53.624639 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.624659 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.624679 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.624697 | orchestrator | 2025-09-27 21:41:53.624714 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-27 21:41:53.624733 | orchestrator | Saturday 27 September 2025 21:38:53 +0000 (0:00:01.448) 0:00:09.738 **** 2025-09-27 21:41:53.624754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.624809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.624830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.624860 | orchestrator | 2025-09-27 21:41:53.624878 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-27 21:41:53.624897 | orchestrator | Saturday 27 September 2025 21:38:57 +0000 (0:00:03.984) 0:00:13.722 **** 2025-09-27 21:41:53.624915 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.624932 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.624950 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.624967 | orchestrator | 2025-09-27 21:41:53.624985 | orchestrator | TASK [mariadb : 
Copying over galera.cnf] *************************************** 2025-09-27 21:41:53.625003 | orchestrator | Saturday 27 September 2025 21:38:58 +0000 (0:00:01.269) 0:00:14.992 **** 2025-09-27 21:41:53.625049 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:53.625067 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.625085 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:53.625105 | orchestrator | 2025-09-27 21:41:53.625123 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 21:41:53.625143 | orchestrator | Saturday 27 September 2025 21:39:03 +0000 (0:00:04.528) 0:00:19.520 **** 2025-09-27 21:41:53.625163 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:53.625183 | orchestrator | 2025-09-27 21:41:53.625203 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-27 21:41:53.625223 | orchestrator | Saturday 27 September 2025 21:39:04 +0000 (0:00:00.577) 0:00:20.098 **** 2025-09-27 21:41:53.625264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625297 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.625317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625336 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.625374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625396 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.625414 | orchestrator | 2025-09-27 21:41:53.625433 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-27 21:41:53.625461 | orchestrator | Saturday 
27 September 2025 21:39:07 +0000 (0:00:02.999) 0:00:23.098 **** 2025-09-27 21:41:53.625483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625502 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.625533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625546 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.625558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625577 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.625588 | orchestrator | 2025-09-27 21:41:53.625599 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-27 21:41:53.625610 | orchestrator | Saturday 27 September 2025 21:39:09 +0000 (0:00:02.686) 0:00:25.784 **** 2025-09-27 21:41:53.625626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625638 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.625659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625684 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.625696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-27 21:41:53.625707 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.625727 | orchestrator | 2025-09-27 21:41:53.625745 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-27 21:41:53.625763 | orchestrator | Saturday 27 September 2025 21:39:12 +0000 (0:00:02.479) 0:00:28.263 **** 2025-09-27 21:41:53.625803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.625833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.625861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-27 21:41:53.625882 | 
orchestrator | 2025-09-27 21:41:53.625893 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-27 21:41:53.625904 | orchestrator | Saturday 27 September 2025 21:39:14 +0000 (0:00:02.378) 0:00:30.642 **** 2025-09-27 21:41:53.625915 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.625927 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:53.625937 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:53.625948 | orchestrator | 2025-09-27 21:41:53.625959 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-27 21:41:53.625970 | orchestrator | Saturday 27 September 2025 21:39:15 +0000 (0:00:00.731) 0:00:31.374 **** 2025-09-27 21:41:53.625981 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.625992 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.626003 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.626137 | orchestrator | 2025-09-27 21:41:53.626151 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-27 21:41:53.626162 | orchestrator | Saturday 27 September 2025 21:39:15 +0000 (0:00:00.374) 0:00:31.748 **** 2025-09-27 21:41:53.626173 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.626184 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.626195 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.626206 | orchestrator | 2025-09-27 21:41:53.626217 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-27 21:41:53.626228 | orchestrator | Saturday 27 September 2025 21:39:15 +0000 (0:00:00.274) 0:00:32.022 **** 2025-09-27 21:41:53.626241 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-27 21:41:53.626252 | orchestrator | ...ignoring 2025-09-27 21:41:53.626264 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-27 21:41:53.626275 | orchestrator | ...ignoring 2025-09-27 21:41:53.626286 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-27 21:41:53.626297 | orchestrator | ...ignoring 2025-09-27 21:41:53.626307 | orchestrator | 2025-09-27 21:41:53.626318 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-27 21:41:53.626329 | orchestrator | Saturday 27 September 2025 21:39:26 +0000 (0:00:10.818) 0:00:42.841 **** 2025-09-27 21:41:53.626340 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.626351 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.626362 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.626371 | orchestrator | 2025-09-27 21:41:53.626381 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-27 21:41:53.626398 | orchestrator | Saturday 27 September 2025 21:39:27 +0000 (0:00:00.467) 0:00:43.308 **** 2025-09-27 21:41:53.626408 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.626418 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.626428 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.626437 | orchestrator | 2025-09-27 21:41:53.626447 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-27 21:41:53.626457 | orchestrator | Saturday 27 September 2025 21:39:27 +0000 (0:00:00.619) 0:00:43.928 **** 2025-09-27 21:41:53.626467 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.626476 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.626486 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.626496 | orchestrator | 2025-09-27 21:41:53.626505 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-27 21:41:53.626515 | orchestrator | Saturday 27 September 2025 21:39:28 +0000 (0:00:00.405) 0:00:44.333 **** 2025-09-27 21:41:53.626524 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.626534 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.626544 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.626553 | orchestrator | 2025-09-27 21:41:53.626563 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-27 21:41:53.626573 | orchestrator | Saturday 27 September 2025 21:39:28 +0000 (0:00:00.405) 0:00:44.739 **** 2025-09-27 21:41:53.626582 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.626598 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.626608 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.626617 | orchestrator | 2025-09-27 21:41:53.626627 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-27 21:41:53.626637 | orchestrator | Saturday 27 September 2025 21:39:29 +0000 (0:00:00.462) 0:00:45.201 **** 2025-09-27 21:41:53.626653 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.626664 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.626673 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.626683 | orchestrator | 2025-09-27 21:41:53.626693 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 21:41:53.626702 | orchestrator | Saturday 27 September 2025 21:39:29 +0000 (0:00:00.741) 0:00:45.943 **** 2025-09-27 21:41:53.626715 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.626732 | orchestrator | skipping: 
[testbed-node-2] 2025-09-27 21:41:53.626748 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-27 21:41:53.626764 | orchestrator | 2025-09-27 21:41:53.626782 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-27 21:41:53.626798 | orchestrator | Saturday 27 September 2025 21:39:30 +0000 (0:00:00.385) 0:00:46.328 **** 2025-09-27 21:41:53.626816 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.626827 | orchestrator | 2025-09-27 21:41:53.626836 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-27 21:41:53.626846 | orchestrator | Saturday 27 September 2025 21:39:40 +0000 (0:00:10.421) 0:00:56.749 **** 2025-09-27 21:41:53.626855 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.626865 | orchestrator | 2025-09-27 21:41:53.626874 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-27 21:41:53.626884 | orchestrator | Saturday 27 September 2025 21:39:40 +0000 (0:00:00.111) 0:00:56.861 **** 2025-09-27 21:41:53.626893 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.626903 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.626913 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.626922 | orchestrator | 2025-09-27 21:41:53.626932 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-27 21:41:53.626941 | orchestrator | Saturday 27 September 2025 21:39:41 +0000 (0:00:00.820) 0:00:57.682 **** 2025-09-27 21:41:53.626951 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.626960 | orchestrator | 2025-09-27 21:41:53.626970 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-27 21:41:53.626987 | orchestrator | Saturday 27 September 2025 21:39:48 +0000 (0:00:06.972) 0:01:04.654 **** 2025-09-27 21:41:53.626997 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.627007 | orchestrator | 2025-09-27 21:41:53.627046 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-27 21:41:53.627057 | orchestrator | Saturday 27 September 2025 21:39:51 +0000 (0:00:02.538) 0:01:07.192 **** 2025-09-27 21:41:53.627066 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.627076 | orchestrator | 2025-09-27 21:41:53.627085 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-27 21:41:53.627095 | orchestrator | Saturday 27 September 2025 21:39:53 +0000 (0:00:02.577) 0:01:09.770 **** 2025-09-27 21:41:53.627104 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.627114 | orchestrator | 2025-09-27 21:41:53.627123 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-27 21:41:53.627133 | orchestrator | Saturday 27 September 2025 21:39:53 +0000 (0:00:00.118) 0:01:09.888 **** 2025-09-27 21:41:53.627142 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.627152 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.627161 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.627171 | orchestrator | 2025-09-27 21:41:53.627180 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-27 21:41:53.627190 | orchestrator | Saturday 27 September 2025 21:39:54 +0000 (0:00:00.313) 0:01:10.202 **** 
2025-09-27 21:41:53.627199 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.627209 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-27 21:41:53.627218 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:53.627228 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:53.627237 | orchestrator | 2025-09-27 21:41:53.627247 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-27 21:41:53.627256 | orchestrator | skipping: no hosts matched 2025-09-27 21:41:53.627266 | orchestrator | 2025-09-27 21:41:53.627275 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-27 21:41:53.627285 | orchestrator | 2025-09-27 21:41:53.627294 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-27 21:41:53.627303 | orchestrator | Saturday 27 September 2025 21:39:54 +0000 (0:00:00.551) 0:01:10.754 **** 2025-09-27 21:41:53.627313 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:41:53.627323 | orchestrator | 2025-09-27 21:41:53.627332 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-27 21:41:53.627342 | orchestrator | Saturday 27 September 2025 21:40:13 +0000 (0:00:18.433) 0:01:29.187 **** 2025-09-27 21:41:53.627352 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.627361 | orchestrator | 2025-09-27 21:41:53.627371 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-27 21:41:53.627380 | orchestrator | Saturday 27 September 2025 21:40:34 +0000 (0:00:21.638) 0:01:50.826 **** 2025-09-27 21:41:53.627390 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.627399 | orchestrator | 2025-09-27 21:41:53.627409 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-27 21:41:53.627418 | orchestrator | 2025-09-27 21:41:53.627428 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-27 21:41:53.627437 | orchestrator | Saturday 27 September 2025 21:40:37 +0000 (0:00:02.306) 0:01:53.132 **** 2025-09-27 21:41:53.627447 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:41:53.627456 | orchestrator | 2025-09-27 21:41:53.627466 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-27 21:41:53.627475 | orchestrator | Saturday 27 September 2025 21:40:59 +0000 (0:00:22.137) 0:02:15.269 **** 2025-09-27 21:41:53.627485 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.627494 | orchestrator | 2025-09-27 21:41:53.627508 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-27 21:41:53.627518 | orchestrator | Saturday 27 September 2025 21:41:14 +0000 (0:00:15.579) 0:02:30.848 **** 2025-09-27 21:41:53.627533 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.627543 | orchestrator | 2025-09-27 21:41:53.627552 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-27 21:41:53.627562 | orchestrator | 2025-09-27 21:41:53.627578 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-27 21:41:53.627589 | orchestrator | Saturday 27 September 2025 21:41:17 +0000 (0:00:02.595) 0:02:33.444 **** 2025-09-27 21:41:53.627598 | orchestrator | changed: [testbed-node-0] 
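After the bootstrap node is up, each remaining node is restarted in turn and the play waits first for port liveness and then for Galera to report the node as synced. A node is ready when wsrep_local_state_comment reaches "Synced"; the sketch below polls for that state. The container name, credential variable and retry budget are assumptions for illustration.

    - name: Wait for a Galera node to sync (illustrative sketch)
      hosts: testbed-node-1
      gather_facts: false
      tasks:
        - name: Poll wsrep_local_state_comment inside the mariadb container
          ansible.builtin.command: >
            docker exec mariadb mysql -u monitor
            -p{{ mariadb_monitor_password }}
            -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
          register: wsrep_state
          until: "'Synced' in wsrep_state.stdout"
          retries: 30
          delay: 10
          changed_when: false
          no_log: true        # keep the password out of the job log

The long "Restart MariaDB container" and "Wait for MariaDB service port liveness" entries in the timing recap below come from exactly this serialized, one-node-at-a-time restart.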
2025-09-27 21:41:53.627608 | orchestrator | 2025-09-27 21:41:53.627617 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-27 21:41:53.627627 | orchestrator | Saturday 27 September 2025 21:41:34 +0000 (0:00:17.025) 0:02:50.469 **** 2025-09-27 21:41:53.627636 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.627646 | orchestrator | 2025-09-27 21:41:53.627656 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-27 21:41:53.627665 | orchestrator | Saturday 27 September 2025 21:41:35 +0000 (0:00:00.677) 0:02:51.146 **** 2025-09-27 21:41:53.627674 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.627684 | orchestrator | 2025-09-27 21:41:53.627694 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-27 21:41:53.627703 | orchestrator | 2025-09-27 21:41:53.627716 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-27 21:41:53.627733 | orchestrator | Saturday 27 September 2025 21:41:37 +0000 (0:00:02.438) 0:02:53.585 **** 2025-09-27 21:41:53.627749 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:41:53.627764 | orchestrator | 2025-09-27 21:41:53.627780 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-27 21:41:53.627790 | orchestrator | Saturday 27 September 2025 21:41:38 +0000 (0:00:00.465) 0:02:54.051 **** 2025-09-27 21:41:53.627800 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.627809 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.627819 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.627828 | orchestrator | 2025-09-27 21:41:53.627838 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-27 21:41:53.627847 | orchestrator | Saturday 27 September 2025 21:41:40 +0000 (0:00:02.484) 0:02:56.536 **** 2025-09-27 21:41:53.627857 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.627867 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.627876 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.627886 | orchestrator | 2025-09-27 21:41:53.627895 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-27 21:41:53.627905 | orchestrator | Saturday 27 September 2025 21:41:42 +0000 (0:00:02.426) 0:02:58.962 **** 2025-09-27 21:41:53.627914 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.627924 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.627933 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.627943 | orchestrator | 2025-09-27 21:41:53.627952 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-27 21:41:53.627962 | orchestrator | Saturday 27 September 2025 21:41:45 +0000 (0:00:02.327) 0:03:01.290 **** 2025-09-27 21:41:53.627971 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.627981 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.627990 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:41:53.628000 | orchestrator | 2025-09-27 21:41:53.628035 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-27 21:41:53.628055 | orchestrator | Saturday 27 September 2025 21:41:47 +0000 (0:00:02.369) 0:03:03.659 **** 
2025-09-27 21:41:53.628071 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:41:53.628086 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:41:53.628095 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:41:53.628105 | orchestrator | 2025-09-27 21:41:53.628114 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-27 21:41:53.628124 | orchestrator | Saturday 27 September 2025 21:41:50 +0000 (0:00:02.923) 0:03:06.583 **** 2025-09-27 21:41:53.628140 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:41:53.628150 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:41:53.628159 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:41:53.628169 | orchestrator | 2025-09-27 21:41:53.628178 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:41:53.628188 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-27 21:41:53.628198 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-27 21:41:53.628209 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-27 21:41:53.628219 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-27 21:41:53.628229 | orchestrator | 2025-09-27 21:41:53.628239 | orchestrator | 2025-09-27 21:41:53.628248 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:41:53.628258 | orchestrator | Saturday 27 September 2025 21:41:50 +0000 (0:00:00.461) 0:03:07.045 **** 2025-09-27 21:41:53.628268 | orchestrator | =============================================================================== 2025-09-27 21:41:53.628277 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.57s 2025-09-27 21:41:53.628287 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 37.22s 2025-09-27 21:41:53.628296 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.03s 2025-09-27 21:41:53.628311 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s 2025-09-27 21:41:53.628320 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.42s 2025-09-27 21:41:53.628330 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.97s 2025-09-27 21:41:53.628346 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.90s 2025-09-27 21:41:53.628356 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.53s 2025-09-27 21:41:53.628366 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.98s 2025-09-27 21:41:53.628375 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.00s 2025-09-27 21:41:53.628385 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.97s 2025-09-27 21:41:53.628394 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.92s 2025-09-27 21:41:53.628404 | orchestrator | Check MariaDB service --------------------------------------------------- 2.77s 2025-09-27 21:41:53.628413 | orchestrator | service-cert-copy : mariadb | Copying over 
backend internal TLS certificate --- 2.69s 2025-09-27 21:41:53.628423 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.58s 2025-09-27 21:41:53.628432 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.54s 2025-09-27 21:41:53.628442 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.48s 2025-09-27 21:41:53.628452 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.48s 2025-09-27 21:41:53.628461 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.44s 2025-09-27 21:41:53.628471 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.43s 2025-09-27 21:41:53.628480 | orchestrator | 2025-09-27 21:41:53 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:41:53.628490 | orchestrator | 2025-09-27 21:41:53 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:53.628507 | orchestrator | 2025-09-27 21:41:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:56.682143 | orchestrator | 2025-09-27 21:41:56 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:41:56.686622 | orchestrator | 2025-09-27 21:41:56 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:41:56.689818 | orchestrator | 2025-09-27 21:41:56 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:56.690352 | orchestrator | 2025-09-27 21:41:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:41:59.739420 | orchestrator | 2025-09-27 21:41:59 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:41:59.741293 | orchestrator | 2025-09-27 21:41:59 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:41:59.743785 | orchestrator | 2025-09-27 21:41:59 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:41:59.743841 | orchestrator | 2025-09-27 21:41:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:02.811924 | orchestrator | 2025-09-27 21:42:02 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:02.814421 | orchestrator | 2025-09-27 21:42:02 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:02.815712 | orchestrator | 2025-09-27 21:42:02 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:02.815961 | orchestrator | 2025-09-27 21:42:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:05.856121 | orchestrator | 2025-09-27 21:42:05 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:05.856227 | orchestrator | 2025-09-27 21:42:05 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:05.856522 | orchestrator | 2025-09-27 21:42:05 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:05.856548 | orchestrator | 2025-09-27 21:42:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:08.897147 | orchestrator | 2025-09-27 21:42:08 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:08.897833 | orchestrator | 2025-09-27 21:42:08 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:08.899018 | 
orchestrator | 2025-09-27 21:42:08 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:08.899046 | orchestrator | 2025-09-27 21:42:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:11.936412 | orchestrator | 2025-09-27 21:42:11 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:11.941137 | orchestrator | 2025-09-27 21:42:11 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:11.942309 | orchestrator | 2025-09-27 21:42:11 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:11.942675 | orchestrator | 2025-09-27 21:42:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:14.978085 | orchestrator | 2025-09-27 21:42:14 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:14.978177 | orchestrator | 2025-09-27 21:42:14 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:14.978191 | orchestrator | 2025-09-27 21:42:14 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:14.978202 | orchestrator | 2025-09-27 21:42:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:18.015220 | orchestrator | 2025-09-27 21:42:18 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:18.015837 | orchestrator | 2025-09-27 21:42:18 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:18.017402 | orchestrator | 2025-09-27 21:42:18 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:18.017434 | orchestrator | 2025-09-27 21:42:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:21.049720 | orchestrator | 2025-09-27 21:42:21 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:21.051382 | orchestrator | 2025-09-27 21:42:21 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:21.054169 | orchestrator | 2025-09-27 21:42:21 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:21.054195 | orchestrator | 2025-09-27 21:42:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:24.102164 | orchestrator | 2025-09-27 21:42:24 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:24.103245 | orchestrator | 2025-09-27 21:42:24 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:24.104509 | orchestrator | 2025-09-27 21:42:24 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:24.104540 | orchestrator | 2025-09-27 21:42:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:27.145152 | orchestrator | 2025-09-27 21:42:27 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:27.145250 | orchestrator | 2025-09-27 21:42:27 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:27.146204 | orchestrator | 2025-09-27 21:42:27 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:27.146239 | orchestrator | 2025-09-27 21:42:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:30.185455 | orchestrator | 2025-09-27 21:42:30 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:30.187364 | orchestrator | 2025-09-27 21:42:30 | INFO  | Task 
7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:30.189026 | orchestrator | 2025-09-27 21:42:30 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:30.189067 | orchestrator | 2025-09-27 21:42:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:33.233618 | orchestrator | 2025-09-27 21:42:33 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:33.234214 | orchestrator | 2025-09-27 21:42:33 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:33.235811 | orchestrator | 2025-09-27 21:42:33 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:33.235834 | orchestrator | 2025-09-27 21:42:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:36.278506 | orchestrator | 2025-09-27 21:42:36 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:36.280287 | orchestrator | 2025-09-27 21:42:36 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:36.282322 | orchestrator | 2025-09-27 21:42:36 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:36.282401 | orchestrator | 2025-09-27 21:42:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:39.319267 | orchestrator | 2025-09-27 21:42:39 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:39.320804 | orchestrator | 2025-09-27 21:42:39 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:39.322934 | orchestrator | 2025-09-27 21:42:39 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:39.323176 | orchestrator | 2025-09-27 21:42:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:42.368018 | orchestrator | 2025-09-27 21:42:42 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:42.369690 | orchestrator | 2025-09-27 21:42:42 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:42.371676 | orchestrator | 2025-09-27 21:42:42 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:42.371709 | orchestrator | 2025-09-27 21:42:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:45.413871 | orchestrator | 2025-09-27 21:42:45 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:45.415490 | orchestrator | 2025-09-27 21:42:45 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:45.417542 | orchestrator | 2025-09-27 21:42:45 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:45.417568 | orchestrator | 2025-09-27 21:42:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:48.464879 | orchestrator | 2025-09-27 21:42:48 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:48.466864 | orchestrator | 2025-09-27 21:42:48 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:48.468372 | orchestrator | 2025-09-27 21:42:48 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:48.468421 | orchestrator | 2025-09-27 21:42:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:51.504675 | orchestrator | 2025-09-27 21:42:51 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state 
STARTED 2025-09-27 21:42:51.505127 | orchestrator | 2025-09-27 21:42:51 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:51.506472 | orchestrator | 2025-09-27 21:42:51 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:51.506510 | orchestrator | 2025-09-27 21:42:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:54.562112 | orchestrator | 2025-09-27 21:42:54 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:54.565355 | orchestrator | 2025-09-27 21:42:54 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:54.567268 | orchestrator | 2025-09-27 21:42:54 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:54.567322 | orchestrator | 2025-09-27 21:42:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:42:57.611643 | orchestrator | 2025-09-27 21:42:57 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:42:57.613177 | orchestrator | 2025-09-27 21:42:57 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:42:57.614819 | orchestrator | 2025-09-27 21:42:57 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:42:57.614863 | orchestrator | 2025-09-27 21:42:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:00.663078 | orchestrator | 2025-09-27 21:43:00 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:00.664649 | orchestrator | 2025-09-27 21:43:00 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:00.667552 | orchestrator | 2025-09-27 21:43:00 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:43:00.667823 | orchestrator | 2025-09-27 21:43:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:03.716107 | orchestrator | 2025-09-27 21:43:03 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:03.717412 | orchestrator | 2025-09-27 21:43:03 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:03.719447 | orchestrator | 2025-09-27 21:43:03 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:43:03.719492 | orchestrator | 2025-09-27 21:43:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:06.756203 | orchestrator | 2025-09-27 21:43:06 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:06.757323 | orchestrator | 2025-09-27 21:43:06 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:06.759871 | orchestrator | 2025-09-27 21:43:06 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:43:06.759933 | orchestrator | 2025-09-27 21:43:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:09.801665 | orchestrator | 2025-09-27 21:43:09 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:09.803926 | orchestrator | 2025-09-27 21:43:09 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:09.805724 | orchestrator | 2025-09-27 21:43:09 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:43:09.805828 | orchestrator | 2025-09-27 21:43:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:12.849811 | orchestrator 
| 2025-09-27 21:43:12 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:12.850689 | orchestrator | 2025-09-27 21:43:12 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:12.852773 | orchestrator | 2025-09-27 21:43:12 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:43:12.852865 | orchestrator | 2025-09-27 21:43:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:15.907483 | orchestrator | 2025-09-27 21:43:15 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:15.909948 | orchestrator | 2025-09-27 21:43:15 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:15.911728 | orchestrator | 2025-09-27 21:43:15 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state STARTED 2025-09-27 21:43:15.911830 | orchestrator | 2025-09-27 21:43:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:18.968954 | orchestrator | 2025-09-27 21:43:18 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:18.970161 | orchestrator | 2025-09-27 21:43:18 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:18.974155 | orchestrator | 2025-09-27 21:43:18 | INFO  | Task 74b5e04c-211b-4887-a5f7-4a2d06766cb4 is in state SUCCESS 2025-09-27 21:43:18.976040 | orchestrator | 2025-09-27 21:43:18.976080 | orchestrator | 2025-09-27 21:43:18.976093 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-27 21:43:18.976105 | orchestrator | 2025-09-27 21:43:18.976116 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-27 21:43:18.976128 | orchestrator | Saturday 27 September 2025 21:41:04 +0000 (0:00:00.543) 0:00:00.543 **** 2025-09-27 21:43:18.976170 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:43:18.976184 | orchestrator | 2025-09-27 21:43:18.976195 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-27 21:43:18.976206 | orchestrator | Saturday 27 September 2025 21:41:04 +0000 (0:00:00.553) 0:00:01.096 **** 2025-09-27 21:43:18.976217 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.976229 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.976240 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.976251 | orchestrator | 2025-09-27 21:43:18.976262 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-27 21:43:18.976272 | orchestrator | Saturday 27 September 2025 21:41:05 +0000 (0:00:00.621) 0:00:01.717 **** 2025-09-27 21:43:18.976284 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.976294 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.976305 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.976316 | orchestrator | 2025-09-27 21:43:18.976326 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-27 21:43:18.976337 | orchestrator | Saturday 27 September 2025 21:41:05 +0000 (0:00:00.270) 0:00:01.988 **** 2025-09-27 21:43:18.976348 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.976359 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.976369 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.976380 | orchestrator | 2025-09-27 
21:43:18.976391 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-27 21:43:18.976402 | orchestrator | Saturday 27 September 2025 21:41:06 +0000 (0:00:00.850) 0:00:02.838 **** 2025-09-27 21:43:18.976412 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.976424 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.976435 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.976445 | orchestrator | 2025-09-27 21:43:18.976456 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-27 21:43:18.976466 | orchestrator | Saturday 27 September 2025 21:41:06 +0000 (0:00:00.327) 0:00:03.165 **** 2025-09-27 21:43:18.976477 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.976488 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.976498 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.976509 | orchestrator | 2025-09-27 21:43:18.976520 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-27 21:43:18.976530 | orchestrator | Saturday 27 September 2025 21:41:07 +0000 (0:00:00.313) 0:00:03.478 **** 2025-09-27 21:43:18.976541 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.976552 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.976562 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.976573 | orchestrator | 2025-09-27 21:43:18.976584 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-27 21:43:18.976611 | orchestrator | Saturday 27 September 2025 21:41:07 +0000 (0:00:00.314) 0:00:03.792 **** 2025-09-27 21:43:18.977420 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.977455 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.977467 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.977478 | orchestrator | 2025-09-27 21:43:18.977489 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-27 21:43:18.977500 | orchestrator | Saturday 27 September 2025 21:41:07 +0000 (0:00:00.517) 0:00:04.310 **** 2025-09-27 21:43:18.977511 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.977521 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.977532 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.977543 | orchestrator | 2025-09-27 21:43:18.977554 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-27 21:43:18.977564 | orchestrator | Saturday 27 September 2025 21:41:08 +0000 (0:00:00.309) 0:00:04.620 **** 2025-09-27 21:43:18.977575 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 21:43:18.977586 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:43:18.977611 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:43:18.977622 | orchestrator | 2025-09-27 21:43:18.977633 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-27 21:43:18.977644 | orchestrator | Saturday 27 September 2025 21:41:08 +0000 (0:00:00.715) 0:00:05.335 **** 2025-09-27 21:43:18.977654 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.977665 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.977696 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.977707 | orchestrator | 
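The container_exec_cmd fact set here is what lets the later ceph-facts tasks run ceph CLI commands inside an already-running monitor container instead of on the host. A rough sketch of how such a fact can be composed, assuming container_binary resolves to "docker" and the first monitor host is reachable via a "mons" inventory group (the actual ceph-ansible expression may differ), might be:

- name: Set_fact container_exec_cmd
  # Build the prefix used to execute ceph commands inside the first monitor's container,
  # e.g. "docker exec ceph-mon-testbed-node-0".
  ansible.builtin.set_fact:
    container_exec_cmd: "{{ container_binary }} exec ceph-mon-{{ hostvars[groups['mons'][0]]['ansible_facts']['hostname'] }}"

The "Find a running mon container" task that follows then verifies, via "docker ps -q --filter name=ceph-mon-<hostname>", that such a container actually exists before the prefix is used.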
2025-09-27 21:43:18.977718 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-27 21:43:18.977729 | orchestrator | Saturday 27 September 2025 21:41:09 +0000 (0:00:00.427) 0:00:05.762 **** 2025-09-27 21:43:18.977741 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 21:43:18.977752 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:43:18.977764 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:43:18.977775 | orchestrator | 2025-09-27 21:43:18.977786 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-27 21:43:18.977797 | orchestrator | Saturday 27 September 2025 21:41:11 +0000 (0:00:02.195) 0:00:07.958 **** 2025-09-27 21:43:18.977809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-27 21:43:18.977820 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-27 21:43:18.977832 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-27 21:43:18.977843 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.977855 | orchestrator | 2025-09-27 21:43:18.977867 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-27 21:43:18.977922 | orchestrator | Saturday 27 September 2025 21:41:11 +0000 (0:00:00.383) 0:00:08.341 **** 2025-09-27 21:43:18.977939 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.977954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.977965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.977997 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978009 | orchestrator | 2025-09-27 21:43:18.978071 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-27 21:43:18.978084 | orchestrator | Saturday 27 September 2025 21:41:12 +0000 (0:00:00.818) 0:00:09.160 **** 2025-09-27 21:43:18.978100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.978115 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.978147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.978160 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978172 | orchestrator | 2025-09-27 21:43:18.978184 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-27 21:43:18.978197 | orchestrator | Saturday 27 September 2025 21:41:12 +0000 (0:00:00.176) 0:00:09.337 **** 2025-09-27 21:43:18.978212 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5135a170238e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-27 21:41:10.002817', 'end': '2025-09-27 21:41:10.044918', 'delta': '0:00:00.042101', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5135a170238e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-27 21:43:18.978229 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '694baa15a6ab', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-27 21:41:10.741690', 'end': '2025-09-27 21:41:10.788643', 'delta': '0:00:00.046953', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['694baa15a6ab'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-27 21:43:18.978281 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ec2b4d0f1d48', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-27 21:41:11.372265', 'end': '2025-09-27 21:41:11.419024', 'delta': '0:00:00.046759', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ec2b4d0f1d48'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-27 21:43:18.978295 | orchestrator | 2025-09-27 21:43:18.978308 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-27 21:43:18.978320 | orchestrator | Saturday 27 September 2025 21:41:13 +0000 (0:00:00.387) 
0:00:09.724 **** 2025-09-27 21:43:18.978331 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.978343 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.978355 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.978367 | orchestrator | 2025-09-27 21:43:18.978379 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-27 21:43:18.978391 | orchestrator | Saturday 27 September 2025 21:41:13 +0000 (0:00:00.444) 0:00:10.169 **** 2025-09-27 21:43:18.978403 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-27 21:43:18.978415 | orchestrator | 2025-09-27 21:43:18.978426 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-27 21:43:18.978437 | orchestrator | Saturday 27 September 2025 21:41:15 +0000 (0:00:01.711) 0:00:11.880 **** 2025-09-27 21:43:18.978458 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978469 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978480 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978491 | orchestrator | 2025-09-27 21:43:18.978502 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-27 21:43:18.978513 | orchestrator | Saturday 27 September 2025 21:41:15 +0000 (0:00:00.306) 0:00:12.187 **** 2025-09-27 21:43:18.978523 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978534 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978545 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978555 | orchestrator | 2025-09-27 21:43:18.978566 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-27 21:43:18.978577 | orchestrator | Saturday 27 September 2025 21:41:16 +0000 (0:00:00.404) 0:00:12.592 **** 2025-09-27 21:43:18.978588 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978598 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978609 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978620 | orchestrator | 2025-09-27 21:43:18.978631 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-27 21:43:18.978647 | orchestrator | Saturday 27 September 2025 21:41:16 +0000 (0:00:00.499) 0:00:13.091 **** 2025-09-27 21:43:18.978658 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.978669 | orchestrator | 2025-09-27 21:43:18.978680 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-27 21:43:18.978690 | orchestrator | Saturday 27 September 2025 21:41:16 +0000 (0:00:00.153) 0:00:13.245 **** 2025-09-27 21:43:18.978701 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978712 | orchestrator | 2025-09-27 21:43:18.978723 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-27 21:43:18.978734 | orchestrator | Saturday 27 September 2025 21:41:17 +0000 (0:00:00.286) 0:00:13.531 **** 2025-09-27 21:43:18.978744 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978755 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978766 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978777 | orchestrator | 2025-09-27 21:43:18.978787 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-27 21:43:18.978798 | orchestrator | Saturday 27 September 2025 21:41:17 +0000 
(0:00:00.288) 0:00:13.819 **** 2025-09-27 21:43:18.978809 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978820 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978831 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978841 | orchestrator | 2025-09-27 21:43:18.978852 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-27 21:43:18.978863 | orchestrator | Saturday 27 September 2025 21:41:17 +0000 (0:00:00.322) 0:00:14.142 **** 2025-09-27 21:43:18.978874 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978885 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978895 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978906 | orchestrator | 2025-09-27 21:43:18.978917 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-27 21:43:18.978928 | orchestrator | Saturday 27 September 2025 21:41:18 +0000 (0:00:00.507) 0:00:14.650 **** 2025-09-27 21:43:18.978938 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.978949 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.978960 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.978970 | orchestrator | 2025-09-27 21:43:18.979048 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-27 21:43:18.979068 | orchestrator | Saturday 27 September 2025 21:41:18 +0000 (0:00:00.312) 0:00:14.962 **** 2025-09-27 21:43:18.979095 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.979117 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.979136 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.979153 | orchestrator | 2025-09-27 21:43:18.979185 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-27 21:43:18.979204 | orchestrator | Saturday 27 September 2025 21:41:18 +0000 (0:00:00.308) 0:00:15.270 **** 2025-09-27 21:43:18.979221 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.979240 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.979260 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.979275 | orchestrator | 2025-09-27 21:43:18.979291 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-27 21:43:18.979345 | orchestrator | Saturday 27 September 2025 21:41:19 +0000 (0:00:00.312) 0:00:15.582 **** 2025-09-27 21:43:18.979356 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.979366 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.979375 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.979385 | orchestrator | 2025-09-27 21:43:18.979394 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-27 21:43:18.979404 | orchestrator | Saturday 27 September 2025 21:41:19 +0000 (0:00:00.490) 0:00:16.073 **** 2025-09-27 21:43:18.979416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d', 'dm-uuid-LVM-Yghi5PMNzAUKKjcwKKhcMFpFez4MUhPBir7d0NnBE5iYUlseHvYe1FXazX5do9YF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9', 'dm-uuid-LVM-a8TT4Fcz9cVddTCRwzsEcymcLVTFc3bZ8ys5WH9K8T3LrHUjRmzCBXWOjsnEYYz1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979569 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.979635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0SFHLR-LyxF-MjbY-BOat-2ikE-xt70-CdNgyn', 'scsi-0QEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990', 'scsi-SQEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.979704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iO0jIg-SzmY-BSev-S2q5-gH03-Ib3M-x0DxmD', 'scsi-0QEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a', 'scsi-SQEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.979717 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2', 'dm-uuid-LVM-TpcckaZuTFD5gkHuNcp7iF3EMSpgI9UrRGozGpTgwStCtvXggsirr1Ly7MW5iEIG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c', 'scsi-SQEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.979743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8', 'dm-uuid-LVM-KLufK8gEI52UL8f1HkAEnlIB2Iyl14XcNHjIye9KHHf8fqvbtKYFAj5B5hAUzsj0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.979770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-09-27 21:43:18.979864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979898 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.979915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.979947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BePNm6-V9ka-X1Ve-uLWx-HM3W-H6mq-AnWkOg', 'scsi-0QEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0', 'scsi-SQEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PLLIxm-Y0We-XJu1-LMUL-FZxs-bv50-rRS5Cm', 'scsi-0QEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac', 'scsi-SQEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9', 'scsi-SQEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980161 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.980179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246', 'dm-uuid-LVM-MOQAAAGC1svH5a50BTbOijG6FagohEA30d3qo9pPllDFPkpEmlvOIWdjpqFvdxlS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f', 'dm-uuid-LVM-gOSH3V1rtxklooScdBzcM6WK8O8LWin1AYZPOij2fw1LxMKg8zO1yIAVilLyzUkT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-27 21:43:18.980330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wdC30I-L5Vz-aNsq-jnLp-jccK-lSHx-2y24Y9', 'scsi-0QEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e', 'scsi-SQEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-phfmu8-ETUC-rUWt-fenU-2AXO-cMKJ-jfcXFD', 'scsi-0QEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad', 'scsi-SQEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980395 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa', 'scsi-SQEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-27 21:43:18.980433 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.980448 | orchestrator | 2025-09-27 21:43:18.980462 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-27 21:43:18.980477 | orchestrator | Saturday 27 September 2025 21:41:20 +0000 (0:00:00.630) 0:00:16.703 **** 2025-09-27 21:43:18.980491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d', 'dm-uuid-LVM-Yghi5PMNzAUKKjcwKKhcMFpFez4MUhPBir7d0NnBE5iYUlseHvYe1FXazX5do9YF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9', 'dm-uuid-LVM-a8TT4Fcz9cVddTCRwzsEcymcLVTFc3bZ8ys5WH9K8T3LrHUjRmzCBXWOjsnEYYz1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980542 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2', 'dm-uuid-LVM-TpcckaZuTFD5gkHuNcp7iF3EMSpgI9UrRGozGpTgwStCtvXggsirr1Ly7MW5iEIG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980619 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8', 'dm-uuid-LVM-KLufK8gEI52UL8f1HkAEnlIB2Iyl14XcNHjIye9KHHf8fqvbtKYFAj5B5hAUzsj0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980660 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980710 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16', 'scsi-SQEMU_QEMU_HARDDISK_51e128d3-1914-4762-adc7-9d4270f02163-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980789 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c2ef8475--4f12--50de--ab79--c841a7bfbe3d-osd--block--c2ef8475--4f12--50de--ab79--c841a7bfbe3d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0SFHLR-LyxF-MjbY-BOat-2ikE-xt70-CdNgyn', 'scsi-0QEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990', 'scsi-SQEMU_QEMU_HARDDISK_a92b9860-302a-4dfa-9a5b-f64375177990'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9-osd--block--e5968580--5dd1--5a87--a5e5--bc9ba69f72d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iO0jIg-SzmY-BSev-S2q5-gH03-Ib3M-x0DxmD', 'scsi-0QEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a', 'scsi-SQEMU_QEMU_HARDDISK_1d27bfee-58fc-413a-aadf-ce708d3c762a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c', 'scsi-SQEMU_QEMU_HARDDISK_57fc99d7-7aa7-4d8e-bac5-79cb8f64eb7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980885 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246', 'dm-uuid-LVM-MOQAAAGC1svH5a50BTbOijG6FagohEA30d3qo9pPllDFPkpEmlvOIWdjpqFvdxlS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980898 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980924 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f', 'dm-uuid-LVM-gOSH3V1rtxklooScdBzcM6WK8O8LWin1AYZPOij2fw1LxMKg8zO1yIAVilLyzUkT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.980952 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.980965 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981014 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981029 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16', 'scsi-SQEMU_QEMU_HARDDISK_a6b1147b-6bd3-47da-8a82-1ade68ae9e5b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981095 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--de74169a--f069--5642--ad17--f2f17c514bb2-osd--block--de74169a--f069--5642--ad17--f2f17c514bb2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BePNm6-V9ka-X1Ve-uLWx-HM3W-H6mq-AnWkOg', 'scsi-0QEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0', 'scsi-SQEMU_QEMU_HARDDISK_13607e9c-06d4-4fec-b04d-15514859d6a0'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--364a105c--f104--5917--80d0--e8f8560ea5f8-osd--block--364a105c--f104--5917--80d0--e8f8560ea5f8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PLLIxm-Y0We-XJu1-LMUL-FZxs-bv50-rRS5Cm', 'scsi-0QEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac', 'scsi-SQEMU_QEMU_HARDDISK_00c7ac73-0c66-4cdd-8f79-353d0386cdac'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981180 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981200 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9', 'scsi-SQEMU_QEMU_HARDDISK_f7aa810c-750c-432b-b053-2bc489acb9c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981242 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.981255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981278 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16', 'scsi-SQEMU_QEMU_HARDDISK_96ed894f-b916-4151-acb7-f0197c26307f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981300 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5f61d8e2--65b7--57ca--8dcb--2a964e525246-osd--block--5f61d8e2--65b7--57ca--8dcb--2a964e525246'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wdC30I-L5Vz-aNsq-jnLp-jccK-lSHx-2y24Y9', 'scsi-0QEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e', 'scsi-SQEMU_QEMU_HARDDISK_3ec8be80-0eed-4819-876a-b80c0ef8150e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2897d5b9--8afd--5dc0--8795--bd1d3af2960f-osd--block--2897d5b9--8afd--5dc0--8795--bd1d3af2960f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-phfmu8-ETUC-rUWt-fenU-2AXO-cMKJ-jfcXFD', 'scsi-0QEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad', 'scsi-SQEMU_QEMU_HARDDISK_89df2119-9fed-4bd7-9779-2bc26187d4ad'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa', 'scsi-SQEMU_QEMU_HARDDISK_fb7d096e-2368-48a2-bece-3fcee17790fa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-27-20-48-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-27 21:43:18.981370 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.981383 | orchestrator | 2025-09-27 21:43:18.981397 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-27 21:43:18.981420 | orchestrator | Saturday 27 September 2025 21:41:20 +0000 (0:00:00.584) 0:00:17.288 **** 2025-09-27 21:43:18.981429 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.981437 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.981445 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.981453 | orchestrator | 2025-09-27 21:43:18.981461 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-27 21:43:18.981469 | orchestrator | Saturday 27 September 2025 21:41:21 +0000 (0:00:00.753) 0:00:18.042 **** 2025-09-27 21:43:18.981477 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.981485 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.981492 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.981500 | orchestrator | 2025-09-27 21:43:18.981508 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-27 21:43:18.981516 | orchestrator | Saturday 27 September 2025 21:41:22 +0000 (0:00:00.493) 0:00:18.535 **** 2025-09-27 21:43:18.981524 | 
orchestrator | ok: [testbed-node-4]
2025-09-27 21:43:18.981532 | orchestrator | ok: [testbed-node-5]
2025-09-27 21:43:18.981540 | orchestrator | ok: [testbed-node-3]
2025-09-27 21:43:18.981547 | orchestrator |
2025-09-27 21:43:18.981555 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-27 21:43:18.981563 | orchestrator | Saturday 27 September 2025 21:41:23 +0000 (0:00:01.481) 0:00:20.016 ****
2025-09-27 21:43:18.981571 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:43:18.981579 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:43:18.981587 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:43:18.981595 | orchestrator |
2025-09-27 21:43:18.981603 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-27 21:43:18.981611 | orchestrator | Saturday 27 September 2025 21:41:23 +0000 (0:00:00.279) 0:00:20.296 ****
2025-09-27 21:43:18.981619 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:43:18.981626 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:43:18.981634 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:43:18.981642 | orchestrator |
2025-09-27 21:43:18.981650 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-27 21:43:18.981658 | orchestrator | Saturday 27 September 2025 21:41:24 +0000 (0:00:00.407) 0:00:20.703 ****
2025-09-27 21:43:18.981666 | orchestrator | skipping: [testbed-node-3]
2025-09-27 21:43:18.981674 | orchestrator | skipping: [testbed-node-4]
2025-09-27 21:43:18.981682 | orchestrator | skipping: [testbed-node-5]
2025-09-27 21:43:18.981690 | orchestrator |
2025-09-27 21:43:18.981697 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-27 21:43:18.981709 | orchestrator | Saturday 27 September 2025 21:41:24 +0000 (0:00:00.522) 0:00:21.225 ****
2025-09-27 21:43:18.981718 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-27 21:43:18.981730 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-27 21:43:18.981744 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-27 21:43:18.981757 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-27 21:43:18.981771 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-27 21:43:18.981785 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-27 21:43:18.981798 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-27 21:43:18.981811 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-27 21:43:18.981825 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
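The Set_fact _monitor_addresses - ipv4 task above builds a per-monitor address map that later ceph-facts consumers (ceph.conf templating, dashboard wiring) read; its IPv6 counterpart follows below and is skipped, presumably because this testbed runs IPv4 only. As orientation only, here is a minimal sketch of how such a fact can be assembled with set_fact over the monitor group. The group name "mons", the optional per-host monitor_address variable, and the fallback to the default IPv4 fact are assumptions for this illustration, not values taken from this log or from the actual ceph-ansible source.

---
# Illustrative sketch: collect one address per monitor host into a dict fact.
# Assumes an inventory group named "mons"; monitor_address is an optional
# per-host override, otherwise the host's default IPv4 address is used.
- name: Collect monitor addresses (sketch)
  hosts: all
  gather_facts: true
  tasks:
    - name: Set_fact _monitor_addresses - ipv4 (sketch)
      ansible.builtin.set_fact:
        _monitor_addresses: >-
          {{ _monitor_addresses | default({})
             | combine({ item: hostvars[item].monitor_address
                               | default(hostvars[item].ansible_default_ipv4.address) }) }}
      loop: "{{ groups['mons'] | default([]) }}"

The real ceph-facts role is considerably more involved (monitor address blocks, interface lookups, dual-stack handling), so treat this purely as a reading aid for the task names in the log.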
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-27 21:43:18.981927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-27 21:43:18.981940 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.981953 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-27 21:43:18.981966 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-27 21:43:18.981999 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-27 21:43:18.982043 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.982054 | orchestrator | 2025-09-27 21:43:18.982062 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-27 21:43:18.982070 | orchestrator | Saturday 27 September 2025 21:41:26 +0000 (0:00:00.371) 0:00:22.480 **** 2025-09-27 21:43:18.982078 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:43:18.982086 | orchestrator | 2025-09-27 21:43:18.982095 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-27 21:43:18.982105 | orchestrator | Saturday 27 September 2025 21:41:26 +0000 (0:00:00.726) 0:00:23.207 **** 2025-09-27 21:43:18.982119 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.982131 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.982145 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.982157 | orchestrator | 2025-09-27 21:43:18.982178 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-27 21:43:18.982193 | orchestrator | Saturday 27 September 2025 21:41:27 +0000 (0:00:00.337) 0:00:23.544 **** 2025-09-27 21:43:18.982205 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.982219 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.982232 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.982246 | orchestrator | 2025-09-27 21:43:18.982259 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-27 21:43:18.982272 | orchestrator | Saturday 27 September 2025 21:41:27 +0000 (0:00:00.331) 0:00:23.876 **** 2025-09-27 21:43:18.982285 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.982298 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.982311 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:43:18.982324 | orchestrator | 2025-09-27 21:43:18.982338 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-27 21:43:18.982351 | orchestrator | Saturday 27 September 2025 21:41:27 +0000 (0:00:00.305) 0:00:24.182 **** 2025-09-27 21:43:18.982364 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.982378 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.982390 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.982403 | orchestrator | 2025-09-27 21:43:18.982417 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-27 21:43:18.982429 | orchestrator | Saturday 27 September 2025 21:41:28 +0000 (0:00:00.590) 0:00:24.772 **** 2025-09-27 21:43:18.982442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:43:18.982455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 
21:43:18.982468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:43:18.982481 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.982494 | orchestrator | 2025-09-27 21:43:18.982508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-27 21:43:18.982520 | orchestrator | Saturday 27 September 2025 21:41:28 +0000 (0:00:00.366) 0:00:25.139 **** 2025-09-27 21:43:18.982533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:43:18.982546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:43:18.982559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:43:18.982582 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.982595 | orchestrator | 2025-09-27 21:43:18.982608 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-27 21:43:18.982621 | orchestrator | Saturday 27 September 2025 21:41:29 +0000 (0:00:00.375) 0:00:25.514 **** 2025-09-27 21:43:18.982634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-27 21:43:18.982647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-27 21:43:18.982660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-27 21:43:18.982673 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.982686 | orchestrator | 2025-09-27 21:43:18.982698 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-27 21:43:18.982712 | orchestrator | Saturday 27 September 2025 21:41:29 +0000 (0:00:00.358) 0:00:25.872 **** 2025-09-27 21:43:18.982724 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:43:18.982738 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:43:18.982750 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:43:18.982763 | orchestrator | 2025-09-27 21:43:18.982776 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-27 21:43:18.982789 | orchestrator | Saturday 27 September 2025 21:41:29 +0000 (0:00:00.306) 0:00:26.179 **** 2025-09-27 21:43:18.982802 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-27 21:43:18.982816 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-27 21:43:18.982828 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-27 21:43:18.982841 | orchestrator | 2025-09-27 21:43:18.982854 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-27 21:43:18.982865 | orchestrator | Saturday 27 September 2025 21:41:30 +0000 (0:00:00.534) 0:00:26.713 **** 2025-09-27 21:43:18.982878 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 21:43:18.982889 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:43:18.982901 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:43:18.982913 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-27 21:43:18.982927 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-27 21:43:18.982940 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-27 21:43:18.982953 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-09-27 21:43:18.982964 | orchestrator | 2025-09-27 21:43:18.982972 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-27 21:43:18.983025 | orchestrator | Saturday 27 September 2025 21:41:31 +0000 (0:00:00.999) 0:00:27.713 **** 2025-09-27 21:43:18.983034 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-27 21:43:18.983042 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-27 21:43:18.983050 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-27 21:43:18.983058 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-27 21:43:18.983066 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-27 21:43:18.983074 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-27 21:43:18.983082 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-27 21:43:18.983090 | orchestrator | 2025-09-27 21:43:18.983104 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-27 21:43:18.983112 | orchestrator | Saturday 27 September 2025 21:41:33 +0000 (0:00:01.963) 0:00:29.676 **** 2025-09-27 21:43:18.983120 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:43:18.983128 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:43:18.983147 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-27 21:43:18.983155 | orchestrator | 2025-09-27 21:43:18.983163 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-27 21:43:18.983170 | orchestrator | Saturday 27 September 2025 21:41:33 +0000 (0:00:00.387) 0:00:30.063 **** 2025-09-27 21:43:18.983177 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:43:18.983185 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:43:18.983224 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:43:18.983231 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:43:18.983238 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-27 21:43:18.983245 | orchestrator | 2025-09-27 21:43:18.983252 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-27 21:43:18.983262 | orchestrator | Saturday 27 September 2025 21:42:19 +0000 (0:00:45.949) 0:01:16.013 **** 2025-09-27 21:43:18.983268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983275 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983282 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983289 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983295 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983302 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983308 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-27 21:43:18.983315 | orchestrator | 2025-09-27 21:43:18.983322 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-27 21:43:18.983329 | orchestrator | Saturday 27 September 2025 21:42:45 +0000 (0:00:26.179) 0:01:42.192 **** 2025-09-27 21:43:18.983335 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983342 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983349 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983355 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983362 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983368 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983375 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-27 21:43:18.983381 | orchestrator | 2025-09-27 21:43:18.983388 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-27 21:43:18.983401 | orchestrator | Saturday 27 September 2025 21:42:58 +0000 (0:00:12.955) 0:01:55.147 **** 2025-09-27 21:43:18.983408 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983414 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:43:18.983421 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:43:18.983427 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983434 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:43:18.983440 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:43:18.983452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983459 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:43:18.983467 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:43:18.983479 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983490 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:43:18.983501 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:43:18.983512 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983524 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:43:18.983535 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:43:18.983546 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-27 21:43:18.983556 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-27 21:43:18.983562 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-27 21:43:18.983569 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-27 21:43:18.983576 | orchestrator | 2025-09-27 21:43:18.983582 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:43:18.983589 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-27 21:43:18.983598 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 21:43:18.983605 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-27 21:43:18.983612 | orchestrator | 2025-09-27 21:43:18.983618 | orchestrator | 2025-09-27 21:43:18.983625 | orchestrator | 2025-09-27 21:43:18.983631 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:43:18.983638 | orchestrator | Saturday 27 September 2025 21:43:16 +0000 (0:00:17.898) 0:02:13.046 **** 2025-09-27 21:43:18.983644 | orchestrator | =============================================================================== 2025-09-27 21:43:18.983651 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.95s 2025-09-27 21:43:18.983658 | orchestrator | generate keys ---------------------------------------------------------- 26.18s 2025-09-27 21:43:18.983668 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.90s 2025-09-27 21:43:18.983675 | orchestrator | get keys from monitors ------------------------------------------------- 12.96s 2025-09-27 21:43:18.983681 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.20s 2025-09-27 21:43:18.983688 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.96s 2025-09-27 21:43:18.983695 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s 2025-09-27 21:43:18.983710 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.48s 2025-09-27 21:43:18.983722 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.00s 2025-09-27 21:43:18.983732 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2025-09-27 21:43:18.983744 | orchestrator | 
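The recap confirms where the time went: creating the OpenStack pools (45.95s), generating keys (26.18s), copying the keyrings (17.90s) and fetching them from the monitors (12.96s) account for nearly the whole play. Every pool item printed earlier (backups, volumes, images, metrics, vms) shares the same shape, so the work reduces to mapping one spec dict onto a handful of ceph CLI calls. A hedged Python sketch of that mapping follows; the command layout uses the standard ceph osd pool subcommands, and printing instead of executing is an illustrative choice.

    # Hedged sketch: map the pool specs printed in the log onto ceph CLI calls.
    # Commands are printed rather than executed to keep the example side-effect free.
    pools = [
        {"name": "backups", "pg_num": 32, "pgp_num": 32, "size": 3,
         "rule_name": "replicated_rule", "application": "rbd"},
        {"name": "volumes", "pg_num": 32, "pgp_num": 32, "size": 3,
         "rule_name": "replicated_rule", "application": "rbd"},
    ]

    def pool_commands(pool):
        return [
            ["ceph", "osd", "pool", "create", pool["name"],
             str(pool["pg_num"]), str(pool["pgp_num"])],
            ["ceph", "osd", "pool", "set", pool["name"], "crush_rule", pool["rule_name"]],
            ["ceph", "osd", "pool", "set", pool["name"], "size", str(pool["size"])],
            ["ceph", "osd", "pool", "application", "enable", pool["name"], pool["application"]],
        ]

    for spec in pools:
        for cmd in pool_commands(spec):
            print(" ".join(cmd))
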
ceph-facts : Check if podman binary is present -------------------------- 0.85s 2025-09-27 21:43:18.983755 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.82s 2025-09-27 21:43:18.983766 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.75s 2025-09-27 21:43:18.983777 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2025-09-27 21:43:18.983785 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2025-09-27 21:43:18.983791 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.63s 2025-09-27 21:43:18.983798 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s 2025-09-27 21:43:18.983804 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2025-09-27 21:43:18.983811 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2025-09-27 21:43:18.983818 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.55s 2025-09-27 21:43:18.983824 | orchestrator | 2025-09-27 21:43:18 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:18.983831 | orchestrator | 2025-09-27 21:43:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:22.024612 | orchestrator | 2025-09-27 21:43:22 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:22.026470 | orchestrator | 2025-09-27 21:43:22 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:22.029640 | orchestrator | 2025-09-27 21:43:22 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:22.029886 | orchestrator | 2025-09-27 21:43:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:25.078312 | orchestrator | 2025-09-27 21:43:25 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:25.080305 | orchestrator | 2025-09-27 21:43:25 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:25.081728 | orchestrator | 2025-09-27 21:43:25 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:25.081769 | orchestrator | 2025-09-27 21:43:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:28.134114 | orchestrator | 2025-09-27 21:43:28 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:28.135172 | orchestrator | 2025-09-27 21:43:28 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:28.137222 | orchestrator | 2025-09-27 21:43:28 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:28.137276 | orchestrator | 2025-09-27 21:43:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:31.191450 | orchestrator | 2025-09-27 21:43:31 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:31.194641 | orchestrator | 2025-09-27 21:43:31 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:31.196739 | orchestrator | 2025-09-27 21:43:31 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:31.196768 | orchestrator | 2025-09-27 21:43:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 
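After the play finishes, the INFO lines are the OSISM wrapper polling its task IDs: each pass reports the state of every tracked task and, as long as any of them is still STARTED, sleeps one second before the next check. A small Python sketch of that wait loop follows; get_task_state stands in for whatever call actually reports the state and is an assumption, while the one-second interval matches the log.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll until every tracked task has left the STARTED state (sketch only)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)     # e.g. STARTED, SUCCESS, FAILURE
                print("Task %s is in state %s" % (task_id, state))
                if state != "STARTED":
                    pending.discard(task_id)
            if pending:
                print("Wait %d second(s) until the next check" % interval)
                time.sleep(interval)
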
21:43:34.251551 | orchestrator | 2025-09-27 21:43:34 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:34.252670 | orchestrator | 2025-09-27 21:43:34 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:34.255678 | orchestrator | 2025-09-27 21:43:34 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:34.255723 | orchestrator | 2025-09-27 21:43:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:37.302972 | orchestrator | 2025-09-27 21:43:37 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state STARTED 2025-09-27 21:43:37.305835 | orchestrator | 2025-09-27 21:43:37 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:37.308205 | orchestrator | 2025-09-27 21:43:37 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:37.308596 | orchestrator | 2025-09-27 21:43:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:40.354403 | orchestrator | 2025-09-27 21:43:40 | INFO  | Task e87772c1-92f1-4a43-9188-255c1951b75c is in state SUCCESS 2025-09-27 21:43:40.355797 | orchestrator | 2025-09-27 21:43:40.356148 | orchestrator | 2025-09-27 21:43:40.356171 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:43:40.356185 | orchestrator | 2025-09-27 21:43:40.356197 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:43:40.356208 | orchestrator | Saturday 27 September 2025 21:41:55 +0000 (0:00:00.261) 0:00:00.261 **** 2025-09-27 21:43:40.356220 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.356232 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.356243 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.356254 | orchestrator | 2025-09-27 21:43:40.356266 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:43:40.356277 | orchestrator | Saturday 27 September 2025 21:41:55 +0000 (0:00:00.308) 0:00:00.570 **** 2025-09-27 21:43:40.356287 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-27 21:43:40.356299 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-27 21:43:40.356310 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-27 21:43:40.356320 | orchestrator | 2025-09-27 21:43:40.356331 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-27 21:43:40.356342 | orchestrator | 2025-09-27 21:43:40.356353 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 21:43:40.356364 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.431) 0:00:01.001 **** 2025-09-27 21:43:40.356375 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:43:40.356387 | orchestrator | 2025-09-27 21:43:40.356398 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-27 21:43:40.356409 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.525) 0:00:01.527 **** 2025-09-27 21:43:40.356426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.356508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.356524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.356545 | orchestrator | 2025-09-27 21:43:40.356556 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-27 21:43:40.356567 | orchestrator | Saturday 27 September 2025 21:41:57 +0000 (0:00:01.186) 0:00:02.714 **** 2025-09-27 21:43:40.356583 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.356594 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.356605 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.356616 | orchestrator | 2025-09-27 21:43:40.356627 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 21:43:40.356638 | orchestrator | Saturday 27 September 2025 21:41:58 +0000 (0:00:00.442) 
0:00:03.156 **** 2025-09-27 21:43:40.356651 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-27 21:43:40.356672 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-27 21:43:40.356685 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-27 21:43:40.356697 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-27 21:43:40.356709 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-27 21:43:40.356721 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-27 21:43:40.356736 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-27 21:43:40.356754 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-27 21:43:40.356772 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-27 21:43:40.356791 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-27 21:43:40.356809 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-27 21:43:40.356830 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-27 21:43:40.356849 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-27 21:43:40.356871 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-27 21:43:40.356891 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-27 21:43:40.356909 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-27 21:43:40.356929 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-27 21:43:40.356962 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-27 21:43:40.357010 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-27 21:43:40.357028 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-27 21:43:40.357039 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-27 21:43:40.357050 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-27 21:43:40.357061 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-27 21:43:40.357071 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-27 21:43:40.357084 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-27 21:43:40.357097 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-27 21:43:40.357107 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-27 21:43:40.357118 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-27 21:43:40.357129 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-27 21:43:40.357140 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-27 21:43:40.357159 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-27 21:43:40.357176 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-27 21:43:40.357193 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-27 21:43:40.357213 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-27 21:43:40.357231 | orchestrator | 2025-09-27 21:43:40.357249 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.357276 | orchestrator | Saturday 27 September 2025 21:41:59 +0000 (0:00:00.772) 0:00:03.928 **** 2025-09-27 21:43:40.357296 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.357314 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.357331 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.357342 | orchestrator | 2025-09-27 21:43:40.357353 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.357364 | orchestrator | Saturday 27 September 2025 21:41:59 +0000 (0:00:00.324) 0:00:04.253 **** 2025-09-27 21:43:40.357375 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.357386 | orchestrator | 2025-09-27 21:43:40.357407 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.357418 | orchestrator | Saturday 27 September 2025 21:41:59 +0000 (0:00:00.155) 0:00:04.409 **** 2025-09-27 21:43:40.357429 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.357440 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.357450 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.357461 | orchestrator | 2025-09-27 21:43:40.357471 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.357492 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:00.464) 0:00:04.873 **** 2025-09-27 21:43:40.357503 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.357513 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.357524 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.357534 | orchestrator | 2025-09-27 21:43:40.357545 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.357556 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:00.330) 0:00:05.203 **** 2025-09-27 21:43:40.357566 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.357577 | orchestrator | 2025-09-27 
21:43:40.357588 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.357598 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:00.155) 0:00:05.359 **** 2025-09-27 21:43:40.357609 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.357620 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.357631 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.357641 | orchestrator | 2025-09-27 21:43:40.357652 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.357662 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:00.343) 0:00:05.702 **** 2025-09-27 21:43:40.357696 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.357707 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.357718 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.357729 | orchestrator | 2025-09-27 21:43:40.357740 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.357750 | orchestrator | Saturday 27 September 2025 21:42:01 +0000 (0:00:00.300) 0:00:06.003 **** 2025-09-27 21:43:40.357761 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.357772 | orchestrator | 2025-09-27 21:43:40.357783 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.357805 | orchestrator | Saturday 27 September 2025 21:42:01 +0000 (0:00:00.116) 0:00:06.119 **** 2025-09-27 21:43:40.357816 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.357827 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.357837 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.357848 | orchestrator | 2025-09-27 21:43:40.357859 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.357869 | orchestrator | Saturday 27 September 2025 21:42:02 +0000 (0:00:00.730) 0:00:06.850 **** 2025-09-27 21:43:40.357880 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.357891 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.358200 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.358235 | orchestrator | 2025-09-27 21:43:40.358253 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.358272 | orchestrator | Saturday 27 September 2025 21:42:02 +0000 (0:00:00.326) 0:00:07.177 **** 2025-09-27 21:43:40.358283 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358294 | orchestrator | 2025-09-27 21:43:40.358305 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.358316 | orchestrator | Saturday 27 September 2025 21:42:02 +0000 (0:00:00.165) 0:00:07.342 **** 2025-09-27 21:43:40.358327 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358337 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.358349 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.358359 | orchestrator | 2025-09-27 21:43:40.358370 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.358381 | orchestrator | Saturday 27 September 2025 21:42:02 +0000 (0:00:00.312) 0:00:07.655 **** 2025-09-27 21:43:40.358392 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.358402 | orchestrator | ok: [testbed-node-1] 2025-09-27 
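The recurring triplet of Update policy file name, Check if policies shall be overwritten and Update custom policy file name is policy_item.yml running once per enabled dashboard service (ceilometer, cinder, designate, glance, keystone, magnum, manila, neutron, nova and octavia). The overwrite branch only triggers when a custom policy file exists for the service, which is why it stays skipped throughout this run. A hedged Python sketch of the lookup idea follows; the directories and file names are illustrative assumptions rather than the exact Kolla search order.

    from pathlib import Path

    # Illustrative search locations only; not the exact Kolla lookup order.
    CUSTOM_DIRS = [Path("/etc/kolla/config/horizon"), Path("/etc/kolla/config")]

    def find_custom_policy(service):
        for directory in CUSTOM_DIRS:
            for name in (service + "_policy.yaml", service + "_policy.json"):
                candidate = directory / name
                if candidate.is_file():
                    return candidate      # found: the shipped policy would be overwritten
        return None                       # nothing found: the task is skipped, as in the log

    for svc in ["ceilometer", "cinder", "designate", "glance", "keystone",
                "magnum", "manila", "neutron", "nova", "octavia"]:
        custom = find_custom_policy(svc)
        print(svc, "custom policy:", custom if custom else "none (skip)")
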
21:43:40.358413 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.358423 | orchestrator | 2025-09-27 21:43:40.358434 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.358458 | orchestrator | Saturday 27 September 2025 21:42:03 +0000 (0:00:00.334) 0:00:07.990 **** 2025-09-27 21:43:40.358468 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358479 | orchestrator | 2025-09-27 21:43:40.358490 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.358501 | orchestrator | Saturday 27 September 2025 21:42:03 +0000 (0:00:00.337) 0:00:08.328 **** 2025-09-27 21:43:40.358511 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358522 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.358532 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.358543 | orchestrator | 2025-09-27 21:43:40.358553 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.358564 | orchestrator | Saturday 27 September 2025 21:42:03 +0000 (0:00:00.324) 0:00:08.652 **** 2025-09-27 21:43:40.358575 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.358585 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.358596 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.358607 | orchestrator | 2025-09-27 21:43:40.358617 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.358628 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:00.320) 0:00:08.972 **** 2025-09-27 21:43:40.358638 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358649 | orchestrator | 2025-09-27 21:43:40.358669 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.358688 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:00.143) 0:00:09.115 **** 2025-09-27 21:43:40.358706 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358723 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.358741 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.358758 | orchestrator | 2025-09-27 21:43:40.358776 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.358811 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:00.295) 0:00:09.410 **** 2025-09-27 21:43:40.358831 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.358851 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.358870 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.358889 | orchestrator | 2025-09-27 21:43:40.358906 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.358920 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.526) 0:00:09.936 **** 2025-09-27 21:43:40.358931 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.358943 | orchestrator | 2025-09-27 21:43:40.358955 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.358966 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.141) 0:00:10.078 **** 2025-09-27 21:43:40.359018 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.359031 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.359042 | orchestrator 
| skipping: [testbed-node-2] 2025-09-27 21:43:40.359054 | orchestrator | 2025-09-27 21:43:40.359066 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.359078 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.306) 0:00:10.384 **** 2025-09-27 21:43:40.359089 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.359101 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.359113 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.359125 | orchestrator | 2025-09-27 21:43:40.359136 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.359149 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.321) 0:00:10.706 **** 2025-09-27 21:43:40.359160 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.359171 | orchestrator | 2025-09-27 21:43:40.359181 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.359192 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.133) 0:00:10.840 **** 2025-09-27 21:43:40.359203 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.359224 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.359235 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.359245 | orchestrator | 2025-09-27 21:43:40.359256 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.359267 | orchestrator | Saturday 27 September 2025 21:42:06 +0000 (0:00:00.299) 0:00:11.140 **** 2025-09-27 21:43:40.359277 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.359288 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.359299 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.359309 | orchestrator | 2025-09-27 21:43:40.359320 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.359331 | orchestrator | Saturday 27 September 2025 21:42:06 +0000 (0:00:00.615) 0:00:11.756 **** 2025-09-27 21:43:40.359341 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.359352 | orchestrator | 2025-09-27 21:43:40.359362 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.359373 | orchestrator | Saturday 27 September 2025 21:42:07 +0000 (0:00:00.133) 0:00:11.889 **** 2025-09-27 21:43:40.359384 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.359395 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.359405 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.359416 | orchestrator | 2025-09-27 21:43:40.359427 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-27 21:43:40.359437 | orchestrator | Saturday 27 September 2025 21:42:07 +0000 (0:00:00.296) 0:00:12.186 **** 2025-09-27 21:43:40.359448 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:43:40.359459 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:43:40.359470 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:43:40.359480 | orchestrator | 2025-09-27 21:43:40.359491 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-27 21:43:40.359502 | orchestrator | Saturday 27 September 2025 21:42:07 +0000 (0:00:00.332) 0:00:12.518 **** 2025-09-27 21:43:40.359512 | orchestrator | skipping: [testbed-node-0] 2025-09-27 
21:43:40.359523 | orchestrator | 2025-09-27 21:43:40.359534 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-27 21:43:40.359544 | orchestrator | Saturday 27 September 2025 21:42:07 +0000 (0:00:00.131) 0:00:12.650 **** 2025-09-27 21:43:40.359555 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.359566 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.359577 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.359587 | orchestrator | 2025-09-27 21:43:40.359598 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-27 21:43:40.359609 | orchestrator | Saturday 27 September 2025 21:42:08 +0000 (0:00:00.503) 0:00:13.153 **** 2025-09-27 21:43:40.359619 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:43:40.359630 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:43:40.359641 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:43:40.359651 | orchestrator | 2025-09-27 21:43:40.359662 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-27 21:43:40.359672 | orchestrator | Saturday 27 September 2025 21:42:10 +0000 (0:00:01.868) 0:00:15.022 **** 2025-09-27 21:43:40.359683 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-27 21:43:40.359694 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-27 21:43:40.359705 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-27 21:43:40.359720 | orchestrator | 2025-09-27 21:43:40.359740 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-27 21:43:40.359759 | orchestrator | Saturday 27 September 2025 21:42:12 +0000 (0:00:02.075) 0:00:17.097 **** 2025-09-27 21:43:40.359787 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-27 21:43:40.359810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-27 21:43:40.359840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-27 21:43:40.359861 | orchestrator | 2025-09-27 21:43:40.359880 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-27 21:43:40.359914 | orchestrator | Saturday 27 September 2025 21:42:14 +0000 (0:00:02.212) 0:00:19.310 **** 2025-09-27 21:43:40.359928 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-27 21:43:40.359939 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-27 21:43:40.359950 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-27 21:43:40.359961 | orchestrator | 2025-09-27 21:43:40.359971 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-27 21:43:40.360013 | orchestrator | Saturday 27 September 2025 21:42:16 +0000 (0:00:01.748) 0:00:21.059 **** 2025-09-27 21:43:40.360025 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.360036 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.360046 | orchestrator | skipping: 
[testbed-node-2] 2025-09-27 21:43:40.360057 | orchestrator | 2025-09-27 21:43:40.360068 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-27 21:43:40.360078 | orchestrator | Saturday 27 September 2025 21:42:16 +0000 (0:00:00.253) 0:00:21.313 **** 2025-09-27 21:43:40.360089 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.360103 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.360121 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.360139 | orchestrator | 2025-09-27 21:43:40.360156 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 21:43:40.360174 | orchestrator | Saturday 27 September 2025 21:42:16 +0000 (0:00:00.247) 0:00:21.560 **** 2025-09-27 21:43:40.360192 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:43:40.360210 | orchestrator | 2025-09-27 21:43:40.360229 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-27 21:43:40.360248 | orchestrator | Saturday 27 September 2025 21:42:17 +0000 (0:00:00.531) 0:00:22.091 **** 2025-09-27 21:43:40.360272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.360324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.360351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.360370 | orchestrator | 2025-09-27 21:43:40.360381 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-27 21:43:40.360392 | orchestrator | Saturday 27 September 2025 21:42:18 +0000 (0:00:01.553) 0:00:23.645 **** 2025-09-27 21:43:40.360413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:43:40.360426 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.360450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
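
Every loop item dumped above has the same shape: a kolla-ansible service definition keyed by service name, carrying the container name, image, environment toggles, volumes, a Docker healthcheck block and a haproxy block that describes the load-balancer listeners. A minimal sketch of reading the interesting fields back out of such an item (the dict is abbreviated from the log; the summarize() helper is made up for illustration and is not part of kolla-ansible):

# Sketch only: summarise a service definition of the shape dumped in the loop
# items above. The dict content is abbreviated from the log; summarize() is an
# illustrative helper, not part of kolla-ansible.
horizon_item = {
    "key": "horizon",
    "value": {
        "container_name": "horizon",
        "image": "registry.osism.tech/kolla/horizon:2024.2",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:80"],
            "timeout": "30",
        },
        "haproxy": {
            "horizon": {"enabled": True, "mode": "http", "external": False,
                        "port": "443", "listen_port": "80", "tls_backend": "no"},
            "horizon_external": {"enabled": True, "mode": "http", "external": True,
                                 "external_fqdn": "api.testbed.osism.xyz",
                                 "port": "443", "listen_port": "80", "tls_backend": "no"},
        },
    },
}

def summarize(item):
    """Print image, health probe and HAProxy listeners of one service item."""
    svc = item["value"]
    print(f"{item['key']}: image={svc['image']}")
    print("  healthcheck:", " ".join(svc["healthcheck"]["test"][1:]))
    for name, lb in svc.get("haproxy", {}).items():
        if lb.get("enabled"):
            side = "external" if lb.get("external") else "internal"
            # reading 'port' as the frontend port and 'listen_port' as the backend port
            print(f"  listener {name}: {side} :{lb['port']} -> backend :{lb['listen_port']}")

summarize(horizon_item)
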
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:43:40.360471 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.360482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:43:40.360494 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.360505 | orchestrator | 2025-09-27 21:43:40.360516 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-27 21:43:40.360527 | orchestrator | Saturday 27 September 2025 21:42:19 +0000 (0:00:00.539) 0:00:24.185 **** 2025-09-27 21:43:40.360552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:43:40.360571 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.360582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:43:40.360600 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.360626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
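
Each of these definitions carries a Docker healthcheck that runs healthcheck_curl against the node's internal API address (port 80 for Horizon here, port 5000 for Keystone further down). healthcheck_curl is a small helper shipped in the kolla images; a rough Python stand-in for what such an HTTP probe checks, with the URL taken from the testbed-node-0 entry above, could look like this:

# Rough stand-in for the container HTTP health probe. The real healthcheck_curl
# is a shell helper inside the kolla images; this only approximates its effect.
import sys
import urllib.request
import urllib.error

def probe(url, timeout=30):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400      # healthy on 2xx/3xx
    except (urllib.error.URLError, OSError):
        return False                             # errors and timeouts count as unhealthy

if __name__ == "__main__":
    sys.exit(0 if probe("http://192.168.16.10:80") else 1)
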
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-27 21:43:40.360638 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.360649 | orchestrator | 2025-09-27 21:43:40.360660 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-27 21:43:40.360671 | orchestrator | Saturday 27 September 2025 21:42:20 +0000 (0:00:00.746) 0:00:24.932 **** 2025-09-27 21:43:40.360682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.360715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.360729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-27 21:43:40.360747 | orchestrator | 2025-09-27 21:43:40.360758 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 21:43:40.360768 | orchestrator | Saturday 27 September 2025 21:42:21 +0000 (0:00:01.502) 0:00:26.434 **** 2025-09-27 21:43:40.360779 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:43:40.360790 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:43:40.360801 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:43:40.360811 | orchestrator | 2025-09-27 21:43:40.360827 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-27 21:43:40.360838 | orchestrator | Saturday 27 September 2025 21:42:21 +0000 (0:00:00.246) 0:00:26.680 **** 2025-09-27 21:43:40.360849 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:43:40.360859 | orchestrator | 2025-09-27 21:43:40.360870 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-27 21:43:40.360886 | orchestrator | Saturday 27 September 2025 21:42:22 +0000 (0:00:00.462) 0:00:27.143 **** 2025-09-27 21:43:40.360898 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:43:40.360908 | orchestrator | 2025-09-27 21:43:40.360919 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-27 21:43:40.360930 | orchestrator | Saturday 27 September 2025 21:42:24 +0000 (0:00:02.287) 0:00:29.431 **** 2025-09-27 21:43:40.360940 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:43:40.360951 | orchestrator | 2025-09-27 21:43:40.360962 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-27 21:43:40.360973 | orchestrator | Saturday 27 September 2025 21:42:27 +0000 (0:00:02.629) 0:00:32.060 **** 2025-09-27 21:43:40.361017 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:43:40.361037 | orchestrator | 2025-09-27 21:43:40.361054 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-27 21:43:40.361073 | orchestrator | Saturday 27 September 2025 21:42:45 +0000 (0:00:18.200) 0:00:50.261 **** 2025-09-27 21:43:40.361091 | orchestrator | 2025-09-27 21:43:40.361109 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-27 21:43:40.361128 | orchestrator | Saturday 27 September 2025 21:42:45 +0000 (0:00:00.078) 0:00:50.340 **** 2025-09-27 21:43:40.361147 | orchestrator | 2025-09-27 21:43:40.361166 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-27 21:43:40.361185 | orchestrator | Saturday 27 September 2025 21:42:45 +0000 (0:00:00.066) 0:00:50.406 **** 2025-09-27 21:43:40.361198 | orchestrator | 2025-09-27 21:43:40.361209 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-27 21:43:40.361220 | orchestrator | Saturday 27 
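
The bootstrap block above first creates the Horizon database and a database user, then runs a one-shot bootstrap container (the 18 s step), which typically applies the dashboard's Django database migrations. The SQL the two database tasks boil down to is roughly the following; user, host pattern and password are placeholders, the real values come from the generated kolla secrets, and kolla-ansible drives this through its MySQL modules rather than raw SQL:

# Approximate SQL behind "Creating Horizon database" and "Creating Horizon
# database user and setting permissions". Placeholders only; not copied from
# the deployment, which issues these through kolla-ansible's MySQL modules.
BOOTSTRAP_SQL = [
    "CREATE DATABASE IF NOT EXISTS horizon",
    "CREATE USER IF NOT EXISTS 'horizon'@'%' IDENTIFIED BY '<password>'",
    "GRANT ALL PRIVILEGES ON horizon.* TO 'horizon'@'%'",
]

for statement in BOOTSTRAP_SQL:
    print(statement + ";")
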
September 2025 21:42:45 +0000 (0:00:00.075) 0:00:50.482 **** 2025-09-27 21:43:40.361230 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:43:40.361241 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:43:40.361252 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:43:40.361272 | orchestrator | 2025-09-27 21:43:40.361283 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:43:40.361294 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-27 21:43:40.361306 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-27 21:43:40.361317 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-27 21:43:40.361328 | orchestrator | 2025-09-27 21:43:40.361339 | orchestrator | 2025-09-27 21:43:40.361349 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:43:40.361360 | orchestrator | Saturday 27 September 2025 21:43:38 +0000 (0:00:52.561) 0:01:43.043 **** 2025-09-27 21:43:40.361371 | orchestrator | =============================================================================== 2025-09-27 21:43:40.361382 | orchestrator | horizon : Restart horizon container ------------------------------------ 52.56s 2025-09-27 21:43:40.361392 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 18.20s 2025-09-27 21:43:40.361403 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.63s 2025-09-27 21:43:40.361413 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.29s 2025-09-27 21:43:40.361424 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.21s 2025-09-27 21:43:40.361434 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.08s 2025-09-27 21:43:40.361445 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.87s 2025-09-27 21:43:40.361455 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.75s 2025-09-27 21:43:40.361465 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.55s 2025-09-27 21:43:40.361476 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.50s 2025-09-27 21:43:40.361487 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.19s 2025-09-27 21:43:40.361497 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-09-27 21:43:40.361508 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.75s 2025-09-27 21:43:40.361518 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.73s 2025-09-27 21:43:40.361529 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2025-09-27 21:43:40.361539 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.54s 2025-09-27 21:43:40.361550 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-09-27 21:43:40.361560 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-09-27 
21:43:40.361571 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-09-27 21:43:40.361582 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2025-09-27 21:43:40.361599 | orchestrator | 2025-09-27 21:43:40 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:40.361610 | orchestrator | 2025-09-27 21:43:40 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:40.361621 | orchestrator | 2025-09-27 21:43:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:43.406400 | orchestrator | 2025-09-27 21:43:43 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:43.406493 | orchestrator | 2025-09-27 21:43:43 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:43.406759 | orchestrator | 2025-09-27 21:43:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:46.461864 | orchestrator | 2025-09-27 21:43:46 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:46.464028 | orchestrator | 2025-09-27 21:43:46 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:46.464105 | orchestrator | 2025-09-27 21:43:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:49.517563 | orchestrator | 2025-09-27 21:43:49 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:49.519262 | orchestrator | 2025-09-27 21:43:49 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:49.519551 | orchestrator | 2025-09-27 21:43:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:52.559415 | orchestrator | 2025-09-27 21:43:52 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:52.560794 | orchestrator | 2025-09-27 21:43:52 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:52.560825 | orchestrator | 2025-09-27 21:43:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:55.614433 | orchestrator | 2025-09-27 21:43:55 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:55.615323 | orchestrator | 2025-09-27 21:43:55 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state STARTED 2025-09-27 21:43:55.615354 | orchestrator | 2025-09-27 21:43:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:43:58.670720 | orchestrator | 2025-09-27 21:43:58 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:43:58.672480 | orchestrator | 2025-09-27 21:43:58 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:43:58.674245 | orchestrator | 2025-09-27 21:43:58 | INFO  | Task 56a2581c-b3ed-47e5-afa0-ca92d7eef41b is in state SUCCESS 2025-09-27 21:43:58.674531 | orchestrator | 2025-09-27 21:43:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:01.719129 | orchestrator | 2025-09-27 21:44:01 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:01.719235 | orchestrator | 2025-09-27 21:44:01 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:01.719245 | orchestrator | 2025-09-27 21:44:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:04.761460 | orchestrator | 2025-09-27 21:44:04 | INFO  | Task 
f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:04.762573 | orchestrator | 2025-09-27 21:44:04 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:04.762611 | orchestrator | 2025-09-27 21:44:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:07.805334 | orchestrator | 2025-09-27 21:44:07 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:07.805872 | orchestrator | 2025-09-27 21:44:07 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:07.805905 | orchestrator | 2025-09-27 21:44:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:10.844921 | orchestrator | 2025-09-27 21:44:10 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:10.845934 | orchestrator | 2025-09-27 21:44:10 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:10.845955 | orchestrator | 2025-09-27 21:44:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:13.879512 | orchestrator | 2025-09-27 21:44:13 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:13.880807 | orchestrator | 2025-09-27 21:44:13 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:13.880841 | orchestrator | 2025-09-27 21:44:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:16.925072 | orchestrator | 2025-09-27 21:44:16 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:16.926002 | orchestrator | 2025-09-27 21:44:16 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:16.926089 | orchestrator | 2025-09-27 21:44:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:19.975911 | orchestrator | 2025-09-27 21:44:19 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:19.977807 | orchestrator | 2025-09-27 21:44:19 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:19.977842 | orchestrator | 2025-09-27 21:44:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:23.020808 | orchestrator | 2025-09-27 21:44:23 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:23.020930 | orchestrator | 2025-09-27 21:44:23 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:23.020947 | orchestrator | 2025-09-27 21:44:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:26.057667 | orchestrator | 2025-09-27 21:44:26 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:26.059070 | orchestrator | 2025-09-27 21:44:26 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:26.059154 | orchestrator | 2025-09-27 21:44:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:29.097549 | orchestrator | 2025-09-27 21:44:29 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:29.100234 | orchestrator | 2025-09-27 21:44:29 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:29.100341 | orchestrator | 2025-09-27 21:44:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:32.147178 | orchestrator | 2025-09-27 21:44:32 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:32.149235 | orchestrator 
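
The interleaved INFO lines come from the deployment wrapper on the manager, which polls the submitted tasks once per second until they leave the STARTED state (the STARTED/SUCCESS values match Celery-style task states). A minimal version of that loop, with get_task_state() as a hypothetical stand-in for however the task backend is queried:

# Minimal polling loop in the spirit of the "is in state STARTED ... Wait 1
# second(s) until the next check" messages above. get_task_state() is a
# hypothetical callable, not an OSISM API.
import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)        # stop polling finished tasks
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
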
| 2025-09-27 21:44:32 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:32.149288 | orchestrator | 2025-09-27 21:44:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:35.201874 | orchestrator | 2025-09-27 21:44:35 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:35.204898 | orchestrator | 2025-09-27 21:44:35 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:35.205110 | orchestrator | 2025-09-27 21:44:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:38.255868 | orchestrator | 2025-09-27 21:44:38 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:38.257857 | orchestrator | 2025-09-27 21:44:38 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:38.257911 | orchestrator | 2025-09-27 21:44:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:41.300453 | orchestrator | 2025-09-27 21:44:41 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:41.303597 | orchestrator | 2025-09-27 21:44:41 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state STARTED 2025-09-27 21:44:41.303680 | orchestrator | 2025-09-27 21:44:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:44.340052 | orchestrator | 2025-09-27 21:44:44 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:44:44.340163 | orchestrator | 2025-09-27 21:44:44 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:44.344071 | orchestrator | 2025-09-27 21:44:44 | INFO  | Task 7ad39bde-e30e-4d6d-8fa2-2360b6b28727 is in state SUCCESS 2025-09-27 21:44:44.345603 | orchestrator | 2025-09-27 21:44:44.345648 | orchestrator | 2025-09-27 21:44:44.345657 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-27 21:44:44.345666 | orchestrator | 2025-09-27 21:44:44.345674 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2025-09-27 21:44:44.345682 | orchestrator | Saturday 27 September 2025 21:43:21 +0000 (0:00:00.188) 0:00:00.188 **** 2025-09-27 21:44:44.345690 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-27 21:44:44.345699 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.345707 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.345715 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:44:44.345738 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.345746 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-27 21:44:44.345754 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-27 21:44:44.345761 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-27 21:44:44.345769 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-27 21:44:44.345776 | orchestrator | 2025-09-27 
21:44:44.345783 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-27 21:44:44.345791 | orchestrator | Saturday 27 September 2025 21:43:26 +0000 (0:00:04.904) 0:00:05.093 **** 2025-09-27 21:44:44.345799 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-27 21:44:44.345806 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.345814 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.345821 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:44:44.345886 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.345894 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-27 21:44:44.345901 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-27 21:44:44.346005 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-27 21:44:44.346053 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-27 21:44:44.346064 | orchestrator | 2025-09-27 21:44:44.346072 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-27 21:44:44.346079 | orchestrator | Saturday 27 September 2025 21:43:30 +0000 (0:00:04.570) 0:00:09.663 **** 2025-09-27 21:44:44.346088 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-27 21:44:44.346096 | orchestrator | 2025-09-27 21:44:44.346103 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-27 21:44:44.346111 | orchestrator | Saturday 27 September 2025 21:43:31 +0000 (0:00:01.015) 0:00:10.679 **** 2025-09-27 21:44:44.346141 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-27 21:44:44.346149 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.346157 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.346165 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:44:44.346172 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.346179 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-27 21:44:44.346579 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-27 21:44:44.346595 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-27 21:44:44.346603 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-27 21:44:44.346610 | orchestrator | 2025-09-27 21:44:44.346617 | orchestrator | TASK [Check if target directories exist] *************************************** 2025-09-27 21:44:44.346625 | orchestrator | Saturday 27 September 2025 21:43:45 +0000 (0:00:14.014) 0:00:24.693 **** 2025-09-27 21:44:44.346632 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/infrastructure/files/ceph) 2025-09-27 21:44:44.346640 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2025-09-27 21:44:44.346647 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-09-27 21:44:44.346654 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2025-09-27 21:44:44.346694 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-09-27 21:44:44.346703 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2025-09-27 21:44:44.346710 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2025-09-27 21:44:44.346718 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2025-09-27 21:44:44.346764 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2025-09-27 21:44:44.346773 | orchestrator | 2025-09-27 21:44:44.346781 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-27 21:44:44.346788 | orchestrator | Saturday 27 September 2025 21:43:49 +0000 (0:00:03.282) 0:00:27.975 **** 2025-09-27 21:44:44.346797 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-27 21:44:44.346813 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.346821 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.346828 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:44:44.346836 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-27 21:44:44.346843 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-27 21:44:44.346850 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-27 21:44:44.346858 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-27 21:44:44.346865 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-27 21:44:44.346873 | orchestrator | 2025-09-27 21:44:44.346880 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:44:44.346888 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:44:44.346897 | orchestrator | 2025-09-27 21:44:44.346904 | orchestrator | 2025-09-27 21:44:44.346924 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:44:44.346931 | orchestrator | Saturday 27 September 2025 21:43:56 +0000 (0:00:06.795) 0:00:34.771 **** 2025-09-27 21:44:44.346939 | orchestrator | =============================================================================== 2025-09-27 21:44:44.346964 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.01s 2025-09-27 21:44:44.346972 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.80s 2025-09-27 21:44:44.346979 | orchestrator | Check if ceph keys exist 
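
The ceph-key play fetches the client keyrings from the first node, stages them in a share directory, and then writes them into the matching directories of the configuration repository so later service deployments can pick them up as overlays. A sketch of that fan-out; the paths are copied from the log, while the keyring-to-directory mapping is inferred from the order of the loop items above and the copy helper itself is illustrative:

# Sketch of the keyring fan-out into the configuration repository. Paths are
# taken from the log output; the mapping is inferred from the loop items and
# distribute() is illustrative, not part of the testbed playbooks.
import shutil
from pathlib import Path

CONFIG = Path("/opt/configuration/environments")
TARGETS = {
    "ceph.client.admin.keyring":         [CONFIG / "infrastructure/files/ceph"],
    "ceph.client.cinder.keyring":        [CONFIG / "kolla/files/overlays/cinder/cinder-volume",
                                          CONFIG / "kolla/files/overlays/cinder/cinder-backup",
                                          CONFIG / "kolla/files/overlays/nova"],
    "ceph.client.cinder-backup.keyring": [CONFIG / "kolla/files/overlays/cinder/cinder-backup"],
    "ceph.client.nova.keyring":          [CONFIG / "kolla/files/overlays/nova"],
    "ceph.client.glance.keyring":        [CONFIG / "kolla/files/overlays/glance"],
    "ceph.client.gnocchi.keyring":       [CONFIG / "kolla/files/overlays/gnocchi"],
    "ceph.client.manila.keyring":        [CONFIG / "kolla/files/overlays/manila"],
}

def distribute(share_dir):
    """Copy each staged keyring into its overlay directories."""
    for keyring, destinations in TARGETS.items():
        source = Path(share_dir) / keyring
        for dest in destinations:
            dest.mkdir(parents=True, exist_ok=True)
            shutil.copy2(source, dest / keyring)
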
------------------------------------------------ 4.90s 2025-09-27 21:44:44.346986 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.57s 2025-09-27 21:44:44.346993 | orchestrator | Check if target directories exist --------------------------------------- 3.28s 2025-09-27 21:44:44.347000 | orchestrator | Create share directory -------------------------------------------------- 1.02s 2025-09-27 21:44:44.347008 | orchestrator | 2025-09-27 21:44:44.347015 | orchestrator | 2025-09-27 21:44:44.347022 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:44:44.347029 | orchestrator | 2025-09-27 21:44:44.347037 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:44:44.347044 | orchestrator | Saturday 27 September 2025 21:41:55 +0000 (0:00:00.270) 0:00:00.270 **** 2025-09-27 21:44:44.347051 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.347059 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:44:44.347066 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:44:44.347073 | orchestrator | 2025-09-27 21:44:44.347080 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:44:44.347088 | orchestrator | Saturday 27 September 2025 21:41:55 +0000 (0:00:00.309) 0:00:00.580 **** 2025-09-27 21:44:44.347095 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-27 21:44:44.347103 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-27 21:44:44.347110 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-27 21:44:44.347118 | orchestrator | 2025-09-27 21:44:44.347125 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-27 21:44:44.347132 | orchestrator | 2025-09-27 21:44:44.347139 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 21:44:44.347146 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.454) 0:00:01.035 **** 2025-09-27 21:44:44.347154 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:44:44.347161 | orchestrator | 2025-09-27 21:44:44.347169 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-27 21:44:44.347176 | orchestrator | Saturday 27 September 2025 21:41:56 +0000 (0:00:00.537) 0:00:01.573 **** 2025-09-27 21:44:44.347220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347305 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347349 | orchestrator | 2025-09-27 21:44:44.347357 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-27 21:44:44.347365 | orchestrator | Saturday 27 September 2025 21:41:58 +0000 (0:00:01.964) 0:00:03.537 **** 2025-09-27 21:44:44.347372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-27 21:44:44.347380 | orchestrator | 2025-09-27 21:44:44.347387 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-27 21:44:44.347394 | orchestrator | Saturday 27 September 2025 21:41:59 +0000 (0:00:00.849) 0:00:04.387 **** 2025-09-27 21:44:44.347402 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.347409 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:44:44.347418 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:44:44.347427 | orchestrator | 2025-09-27 21:44:44.347435 | orchestrator | TASK [keystone : Check if Keystone domain-specific 
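
Alongside the main keystone container, each node also gets keystone-ssh (an sshd listening on 8023, as its healthcheck shows) and keystone-fernet, all sharing the keystone_fernet_tokens volume: the fernet container rotates the symmetric keys Keystone issues tokens with, and the ssh sidecar is what allows the rotated keys to be synchronised to the other nodes. For what such a key actually does, a self-contained illustration with the cryptography package; this shows the fernet primitive only, not Keystone's key-rotation logic:

# Fernet symmetric tokens, the primitive behind the keys stored in the
# keystone_fernet_tokens volume. Requires the "cryptography" package; this is
# a generic illustration, not Keystone code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # stands in for one key in the fernet key repository
f = Fernet(key)

token = f.encrypt(b"payload standing in for token data")
print(token)                     # opaque, URL-safe token
print(f.decrypt(token))          # any node holding the same key can validate it
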
config is supplied] ********* 2025-09-27 21:44:44.347443 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:00.513) 0:00:04.900 **** 2025-09-27 21:44:44.347451 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:44:44.347459 | orchestrator | 2025-09-27 21:44:44.347467 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 21:44:44.347475 | orchestrator | Saturday 27 September 2025 21:42:00 +0000 (0:00:00.811) 0:00:05.712 **** 2025-09-27 21:44:44.347483 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:44:44.347491 | orchestrator | 2025-09-27 21:44:44.347499 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-27 21:44:44.347507 | orchestrator | Saturday 27 September 2025 21:42:01 +0000 (0:00:00.619) 0:00:06.331 **** 2025-09-27 21:44:44.347520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.347630 | orchestrator | 2025-09-27 21:44:44.347639 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-27 21:44:44.347647 | orchestrator | Saturday 27 September 2025 21:42:04 +0000 (0:00:03.393) 0:00:09.725 **** 2025-09-27 21:44:44.347656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:44:44.347665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.347682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:44:44.347692 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 21:44:44.347705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:44:44.347715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.347723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:44:44.347732 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.347741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:44:44.347759 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.347772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:44:44.347781 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.347789 | orchestrator | 2025-09-27 21:44:44.347797 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-27 21:44:44.347806 | orchestrator | Saturday 27 September 2025 21:42:05 +0000 (0:00:00.767) 0:00:10.493 **** 2025-09-27 21:44:44.347817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:44:44.347826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.347833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:44:44.347841 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.347854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:44:44.347866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.347878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:44:44.347886 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.347893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-27 21:44:44.347901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.347914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-27 21:44:44.347921 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.347929 | orchestrator | 2025-09-27 21:44:44.347936 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-27 21:44:44.347960 | orchestrator | Saturday 27 September 2025 21:42:06 +0000 (0:00:00.746) 0:00:11.239 **** 2025-09-27 21:44:44.347976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.347997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.348010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348042 | orchestrator | 
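The container definitions echoed by these loop items carry the health checks Kolla attaches to each keystone container: healthcheck_curl against the node's port 5000 for the API container, healthcheck_listen sshd 8023 for keystone_ssh, and /usr/bin/fernet-healthcheck.sh for keystone_fernet. As a rough, hedged sketch (not part of the deployment itself; the helper scripts live inside the Kolla images, and the IP/port values are taken verbatim from the log), the same checks can be approximated from a node like this:

#!/usr/bin/env bash
# Hedged sketch: approximate the health checks shown in the container
# definitions above from the host. Values come from the log; adjust per node.

# keystone API (log: healthcheck_curl http://192.168.16.10:5000)
curl -fsS -o /dev/null http://192.168.16.10:5000 && echo "keystone API answers"

# keystone_ssh (log: healthcheck_listen sshd 8023) - check the port is bound
ss -tln | grep -q ':8023 ' && echo "keystone_ssh listening on 8023"

# keystone_fernet runs its check inside the container
docker exec keystone_fernet /usr/bin/fernet-healthcheck.sh && echo "fernet keys OK"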
changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348065 | orchestrator | 2025-09-27 21:44:44.348072 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-27 21:44:44.348080 | orchestrator | Saturday 27 September 2025 21:42:09 +0000 (0:00:03.269) 0:00:14.508 **** 2025-09-27 21:44:44.348093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.348101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.348118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.348127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.348135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.348151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.348159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348190 | orchestrator | 2025-09-27 21:44:44.348198 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-27 21:44:44.348205 | orchestrator | Saturday 27 September 2025 21:42:15 +0000 (0:00:05.441) 0:00:19.949 **** 2025-09-27 21:44:44.348212 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.348220 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:44:44.348227 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:44:44.348234 | orchestrator | 2025-09-27 21:44:44.348242 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-27 21:44:44.348249 | orchestrator | Saturday 27 September 2025 21:42:16 +0000 (0:00:01.342) 0:00:21.292 **** 2025-09-27 21:44:44.348256 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.348264 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.348271 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.348279 | orchestrator | 2025-09-27 21:44:44.348286 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-27 21:44:44.348293 | orchestrator | Saturday 27 September 2025 21:42:16 +0000 (0:00:00.436) 0:00:21.728 **** 2025-09-27 21:44:44.348301 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.348315 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.348322 | orchestrator | 
skipping: [testbed-node-2] 2025-09-27 21:44:44.348330 | orchestrator | 2025-09-27 21:44:44.348337 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-27 21:44:44.348345 | orchestrator | Saturday 27 September 2025 21:42:17 +0000 (0:00:00.270) 0:00:21.999 **** 2025-09-27 21:44:44.348352 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.348359 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.348367 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.348374 | orchestrator | 2025-09-27 21:44:44.348381 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-27 21:44:44.348389 | orchestrator | Saturday 27 September 2025 21:42:17 +0000 (0:00:00.383) 0:00:22.382 **** 2025-09-27 21:44:44.348396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.348404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.348419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
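The policy-related tasks above ("Check if policies shall be overwritten", "Set keystone policy file", "Copying over existing policy file") pick up the overlay at /opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml and copy it into each node's keystone config directory. A quick, hedged way to confirm the overlay landed (the /etc/kolla/keystone destination assumes kolla-ansible's default node_config_directory):

# Hedged sketch: confirm the keystone policy overlay was copied to the nodes.
# The destination path assumes kolla-ansible's default node_config_directory.
for node in testbed-node-0 testbed-node-1 testbed-node-2; do
  ssh "$node" 'ls -l /etc/kolla/keystone/policy.yaml && head -n 5 /etc/kolla/keystone/policy.yaml'
done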
2025-09-27 21:44:44.348432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.348444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.348452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-27 21:44:44.348460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.348487 | orchestrator | 2025-09-27 21:44:44.348495 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 21:44:44.348502 | orchestrator | Saturday 27 September 2025 21:42:19 +0000 (0:00:02.330) 0:00:24.713 **** 2025-09-27 21:44:44.348510 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.348523 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.348531 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.348542 | orchestrator | 2025-09-27 21:44:44.348549 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-27 21:44:44.348556 | orchestrator | Saturday 27 September 2025 21:42:20 +0000 (0:00:00.271) 0:00:24.985 **** 2025-09-27 21:44:44.348564 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-27 21:44:44.348571 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-27 21:44:44.348579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-27 21:44:44.348586 | orchestrator | 2025-09-27 21:44:44.348593 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-27 21:44:44.348600 | orchestrator | Saturday 27 September 2025 21:42:21 +0000 (0:00:01.618) 0:00:26.604 **** 2025-09-27 21:44:44.348607 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:44:44.348615 | orchestrator | 2025-09-27 21:44:44.348622 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-27 21:44:44.348629 | orchestrator | Saturday 27 September 2025 21:42:22 +0000 (0:00:00.791) 0:00:27.396 **** 2025-09-27 21:44:44.348636 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.348644 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.348651 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.348658 | orchestrator | 2025-09-27 21:44:44.348665 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-27 21:44:44.348672 | orchestrator | Saturday 27 September 2025 21:42:23 +0000 (0:00:00.636) 0:00:28.032 **** 2025-09-27 21:44:44.348680 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:44:44.348687 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-27 21:44:44.348694 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 21:44:44.348702 | orchestrator | 2025-09-27 21:44:44.348709 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-27 21:44:44.348717 | orchestrator | Saturday 27 September 2025 21:42:24 
+0000 (0:00:00.849) 0:00:28.881 **** 2025-09-27 21:44:44.348724 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.348731 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:44:44.348738 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:44:44.348745 | orchestrator | 2025-09-27 21:44:44.348753 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-27 21:44:44.348760 | orchestrator | Saturday 27 September 2025 21:42:24 +0000 (0:00:00.265) 0:00:29.147 **** 2025-09-27 21:44:44.348767 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-27 21:44:44.348775 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-27 21:44:44.348782 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-27 21:44:44.348790 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-27 21:44:44.348797 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-27 21:44:44.348804 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-27 21:44:44.348812 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-27 21:44:44.348819 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-27 21:44:44.348827 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-27 21:44:44.348834 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-27 21:44:44.348841 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-27 21:44:44.348849 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-27 21:44:44.348862 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-27 21:44:44.348869 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-27 21:44:44.348876 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-27 21:44:44.348888 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 21:44:44.348896 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 21:44:44.348903 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 21:44:44.348910 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 21:44:44.348918 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 21:44:44.348925 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 21:44:44.348932 | orchestrator | 2025-09-27 21:44:44.348940 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-27 21:44:44.348992 | orchestrator | Saturday 27 September 2025 
21:42:33 +0000 (0:00:08.971) 0:00:38.118 **** 2025-09-27 21:44:44.349005 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 21:44:44.349012 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 21:44:44.349020 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 21:44:44.349027 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 21:44:44.349034 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 21:44:44.349042 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 21:44:44.349049 | orchestrator | 2025-09-27 21:44:44.349057 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-27 21:44:44.349064 | orchestrator | Saturday 27 September 2025 21:42:36 +0000 (0:00:03.153) 0:00:41.272 **** 2025-09-27 21:44:44.349072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.349081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.349100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-27 21:44:44.349113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.349122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.349129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-27 21:44:44.349137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.349144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.349157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-27 21:44:44.349164 | orchestrator | 2025-09-27 21:44:44.349172 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 21:44:44.349183 | orchestrator | Saturday 27 September 2025 21:42:38 +0000 (0:00:02.481) 0:00:43.754 **** 2025-09-27 21:44:44.349191 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.349198 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.349205 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.349213 | orchestrator | 2025-09-27 21:44:44.349220 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-27 21:44:44.349228 | orchestrator | Saturday 27 September 2025 21:42:39 +0000 (0:00:00.303) 0:00:44.057 **** 2025-09-27 21:44:44.349235 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349242 | orchestrator | 2025-09-27 21:44:44.349249 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-27 21:44:44.349258 | orchestrator | Saturday 27 September 2025 21:42:41 +0000 (0:00:02.480) 0:00:46.538 **** 2025-09-27 21:44:44.349265 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349272 | orchestrator | 2025-09-27 21:44:44.349279 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-27 21:44:44.349287 | orchestrator | Saturday 27 September 2025 21:42:44 +0000 (0:00:02.657) 0:00:49.195 **** 2025-09-27 21:44:44.349298 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:44:44.349306 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:44:44.349313 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.349320 | orchestrator | 2025-09-27 21:44:44.349328 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-27 21:44:44.349335 | orchestrator | Saturday 27 September 2025 21:42:45 +0000 (0:00:00.875) 0:00:50.071 **** 2025-09-27 21:44:44.349342 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.349349 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:44:44.349357 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:44:44.349364 | orchestrator | 2025-09-27 21:44:44.349371 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-27 21:44:44.349379 | orchestrator | Saturday 27 September 2025 21:42:45 +0000 (0:00:00.522) 0:00:50.594 **** 2025-09-27 21:44:44.349386 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 21:44:44.349393 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.349400 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.349408 | orchestrator | 2025-09-27 21:44:44.349415 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-27 21:44:44.349422 | orchestrator | Saturday 27 September 2025 21:42:46 +0000 (0:00:00.414) 0:00:51.008 **** 2025-09-27 21:44:44.349430 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349437 | orchestrator | 2025-09-27 21:44:44.349444 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-27 21:44:44.349456 | orchestrator | Saturday 27 September 2025 21:43:01 +0000 (0:00:15.344) 0:01:06.353 **** 2025-09-27 21:44:44.349464 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349471 | orchestrator | 2025-09-27 21:44:44.349478 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-27 21:44:44.349486 | orchestrator | Saturday 27 September 2025 21:43:13 +0000 (0:00:11.607) 0:01:17.960 **** 2025-09-27 21:44:44.349493 | orchestrator | 2025-09-27 21:44:44.349500 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-27 21:44:44.349508 | orchestrator | Saturday 27 September 2025 21:43:13 +0000 (0:00:00.062) 0:01:18.023 **** 2025-09-27 21:44:44.349515 | orchestrator | 2025-09-27 21:44:44.349522 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-27 21:44:44.349530 | orchestrator | Saturday 27 September 2025 21:43:13 +0000 (0:00:00.063) 0:01:18.086 **** 2025-09-27 21:44:44.349537 | orchestrator | 2025-09-27 21:44:44.349544 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-27 21:44:44.349551 | orchestrator | Saturday 27 September 2025 21:43:13 +0000 (0:00:00.068) 0:01:18.155 **** 2025-09-27 21:44:44.349558 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349566 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:44:44.349573 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:44:44.349580 | orchestrator | 2025-09-27 21:44:44.349588 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-27 21:44:44.349595 | orchestrator | Saturday 27 September 2025 21:43:29 +0000 (0:00:15.719) 0:01:33.874 **** 2025-09-27 21:44:44.349602 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349610 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:44:44.349617 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:44:44.349624 | orchestrator | 2025-09-27 21:44:44.349632 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-27 21:44:44.349639 | orchestrator | Saturday 27 September 2025 21:43:34 +0000 (0:00:05.781) 0:01:39.656 **** 2025-09-27 21:44:44.349647 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:44:44.349654 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349661 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:44:44.349669 | orchestrator | 2025-09-27 21:44:44.349676 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 21:44:44.349683 | orchestrator | Saturday 27 September 2025 21:43:46 +0000 (0:00:11.953) 0:01:51.609 **** 2025-09-27 21:44:44.349691 | orchestrator | 
included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:44:44.349698 | orchestrator | 2025-09-27 21:44:44.349705 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-27 21:44:44.349713 | orchestrator | Saturday 27 September 2025 21:43:47 +0000 (0:00:00.822) 0:01:52.432 **** 2025-09-27 21:44:44.349720 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:44:44.349727 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.349734 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:44:44.349741 | orchestrator | 2025-09-27 21:44:44.349748 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-27 21:44:44.349756 | orchestrator | Saturday 27 September 2025 21:43:48 +0000 (0:00:00.866) 0:01:53.298 **** 2025-09-27 21:44:44.349763 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:44:44.349770 | orchestrator | 2025-09-27 21:44:44.349778 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-27 21:44:44.349785 | orchestrator | Saturday 27 September 2025 21:43:50 +0000 (0:00:01.916) 0:01:55.215 **** 2025-09-27 21:44:44.349793 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-27 21:44:44.349800 | orchestrator | 2025-09-27 21:44:44.349812 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-27 21:44:44.349819 | orchestrator | Saturday 27 September 2025 21:44:03 +0000 (0:00:12.943) 0:02:08.158 **** 2025-09-27 21:44:44.349827 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-27 21:44:44.349839 | orchestrator | 2025-09-27 21:44:44.349847 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-27 21:44:44.349854 | orchestrator | Saturday 27 September 2025 21:44:30 +0000 (0:00:26.908) 0:02:35.067 **** 2025-09-27 21:44:44.349861 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-27 21:44:44.349869 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-27 21:44:44.349876 | orchestrator | 2025-09-27 21:44:44.349883 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-27 21:44:44.349892 | orchestrator | Saturday 27 September 2025 21:44:37 +0000 (0:00:07.476) 0:02:42.544 **** 2025-09-27 21:44:44.349904 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.349911 | orchestrator | 2025-09-27 21:44:44.349918 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-27 21:44:44.349925 | orchestrator | Saturday 27 September 2025 21:44:37 +0000 (0:00:00.142) 0:02:42.686 **** 2025-09-27 21:44:44.349933 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.349940 | orchestrator | 2025-09-27 21:44:44.349960 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-27 21:44:44.349968 | orchestrator | Saturday 27 September 2025 21:44:38 +0000 (0:00:00.132) 0:02:42.818 **** 2025-09-27 21:44:44.349975 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.349982 | orchestrator | 2025-09-27 21:44:44.349990 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-27 21:44:44.349997 | orchestrator | 
Saturday 27 September 2025 21:44:38 +0000 (0:00:00.134) 0:02:42.952 **** 2025-09-27 21:44:44.350004 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.350012 | orchestrator | 2025-09-27 21:44:44.350056 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-27 21:44:44.350063 | orchestrator | Saturday 27 September 2025 21:44:38 +0000 (0:00:00.540) 0:02:43.493 **** 2025-09-27 21:44:44.350070 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:44:44.350077 | orchestrator | 2025-09-27 21:44:44.350085 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-27 21:44:44.350092 | orchestrator | Saturday 27 September 2025 21:44:42 +0000 (0:00:03.447) 0:02:46.941 **** 2025-09-27 21:44:44.350210 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:44:44.350219 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:44:44.350227 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:44:44.350234 | orchestrator | 2025-09-27 21:44:44.350241 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:44:44.350250 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-27 21:44:44.350259 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-27 21:44:44.350304 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-27 21:44:44.350314 | orchestrator | 2025-09-27 21:44:44.350321 | orchestrator | 2025-09-27 21:44:44.350328 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:44:44.350336 | orchestrator | Saturday 27 September 2025 21:44:42 +0000 (0:00:00.463) 0:02:47.404 **** 2025-09-27 21:44:44.350343 | orchestrator | =============================================================================== 2025-09-27 21:44:44.350351 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.91s 2025-09-27 21:44:44.350358 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.72s 2025-09-27 21:44:44.350365 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.34s 2025-09-27 21:44:44.350372 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.94s 2025-09-27 21:44:44.350388 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.95s 2025-09-27 21:44:44.350395 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.61s 2025-09-27 21:44:44.350402 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.97s 2025-09-27 21:44:44.350410 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.48s 2025-09-27 21:44:44.350417 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.78s 2025-09-27 21:44:44.350424 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.44s 2025-09-27 21:44:44.350431 | orchestrator | keystone : Creating default user role ----------------------------------- 3.45s 2025-09-27 21:44:44.350439 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.39s 2025-09-27 21:44:44.350446 | orchestrator | 
keystone : Copying over config.json files for services ------------------ 3.27s 2025-09-27 21:44:44.350453 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.15s 2025-09-27 21:44:44.350461 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.66s 2025-09-27 21:44:44.350468 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.48s 2025-09-27 21:44:44.350475 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.48s 2025-09-27 21:44:44.350482 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.33s 2025-09-27 21:44:44.350496 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.96s 2025-09-27 21:44:44.350503 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.92s 2025-09-27 21:44:44.350510 | orchestrator | 2025-09-27 21:44:44 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:44:44.350518 | orchestrator | 2025-09-27 21:44:44 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:44:44.350525 | orchestrator | 2025-09-27 21:44:44 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:44:44.350533 | orchestrator | 2025-09-27 21:44:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:47.381044 | orchestrator | 2025-09-27 21:44:47 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:44:47.382674 | orchestrator | 2025-09-27 21:44:47 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:47.384298 | orchestrator | 2025-09-27 21:44:47 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:44:47.385455 | orchestrator | 2025-09-27 21:44:47 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:44:47.386499 | orchestrator | 2025-09-27 21:44:47 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:44:47.386534 | orchestrator | 2025-09-27 21:44:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:50.420850 | orchestrator | 2025-09-27 21:44:50 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:44:50.421016 | orchestrator | 2025-09-27 21:44:50 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:50.422235 | orchestrator | 2025-09-27 21:44:50 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:44:50.422888 | orchestrator | 2025-09-27 21:44:50 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:44:50.423747 | orchestrator | 2025-09-27 21:44:50 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:44:50.423827 | orchestrator | 2025-09-27 21:44:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:53.469316 | orchestrator | 2025-09-27 21:44:53 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:44:53.472055 | orchestrator | 2025-09-27 21:44:53 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:53.472848 | orchestrator | 2025-09-27 21:44:53 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:44:53.477200 | orchestrator | 2025-09-27 21:44:53 | INFO  | Task 
57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:44:53.497258 | orchestrator | 2025-09-27 21:44:53 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:44:53.497345 | orchestrator | 2025-09-27 21:44:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:56.542380 | orchestrator | 2025-09-27 21:44:56 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:44:56.544025 | orchestrator | 2025-09-27 21:44:56 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state STARTED 2025-09-27 21:44:56.545749 | orchestrator | 2025-09-27 21:44:56 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:44:56.547314 | orchestrator | 2025-09-27 21:44:56 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:44:56.548755 | orchestrator | 2025-09-27 21:44:56 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:44:56.548783 | orchestrator | 2025-09-27 21:44:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:44:59.604234 | orchestrator | 2025-09-27 21:44:59 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:44:59.607172 | orchestrator | 2025-09-27 21:44:59 | INFO  | Task f01dfe7c-07f2-43c1-b432-7fabdce5ad8d is in state SUCCESS 2025-09-27 21:44:59.609174 | orchestrator | 2025-09-27 21:44:59 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:44:59.611124 | orchestrator | 2025-09-27 21:44:59 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:44:59.613136 | orchestrator | 2025-09-27 21:44:59 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:44:59.614703 | orchestrator | 2025-09-27 21:44:59 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:44:59.614924 | orchestrator | 2025-09-27 21:44:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:02.660210 | orchestrator | 2025-09-27 21:45:02 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:02.661197 | orchestrator | 2025-09-27 21:45:02 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:02.663186 | orchestrator | 2025-09-27 21:45:02 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:02.665379 | orchestrator | 2025-09-27 21:45:02 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:02.667024 | orchestrator | 2025-09-27 21:45:02 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:02.667076 | orchestrator | 2025-09-27 21:45:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:05.716562 | orchestrator | 2025-09-27 21:45:05 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:05.718717 | orchestrator | 2025-09-27 21:45:05 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:05.720879 | orchestrator | 2025-09-27 21:45:05 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:05.722728 | orchestrator | 2025-09-27 21:45:05 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:05.724874 | orchestrator | 2025-09-27 21:45:05 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:05.725048 | orchestrator | 2025-09-27 
21:45:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:08.774080 | orchestrator | 2025-09-27 21:45:08 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:08.775917 | orchestrator | 2025-09-27 21:45:08 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:08.779139 | orchestrator | 2025-09-27 21:45:08 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:08.782106 | orchestrator | 2025-09-27 21:45:08 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:08.783908 | orchestrator | 2025-09-27 21:45:08 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:08.784277 | orchestrator | 2025-09-27 21:45:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:11.835440 | orchestrator | 2025-09-27 21:45:11 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:11.837156 | orchestrator | 2025-09-27 21:45:11 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:11.838350 | orchestrator | 2025-09-27 21:45:11 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:11.839824 | orchestrator | 2025-09-27 21:45:11 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:11.842240 | orchestrator | 2025-09-27 21:45:11 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:11.842525 | orchestrator | 2025-09-27 21:45:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:14.889484 | orchestrator | 2025-09-27 21:45:14 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:14.892023 | orchestrator | 2025-09-27 21:45:14 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:14.894593 | orchestrator | 2025-09-27 21:45:14 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:14.896875 | orchestrator | 2025-09-27 21:45:14 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:14.898783 | orchestrator | 2025-09-27 21:45:14 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:14.898827 | orchestrator | 2025-09-27 21:45:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:17.944414 | orchestrator | 2025-09-27 21:45:17 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:17.945913 | orchestrator | 2025-09-27 21:45:17 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:17.947650 | orchestrator | 2025-09-27 21:45:17 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:17.949223 | orchestrator | 2025-09-27 21:45:17 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:17.950534 | orchestrator | 2025-09-27 21:45:17 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:17.950894 | orchestrator | 2025-09-27 21:45:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:20.993097 | orchestrator | 2025-09-27 21:45:20 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:20.994247 | orchestrator | 2025-09-27 21:45:20 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:20.995221 | orchestrator | 2025-09-27 
21:45:20 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:20.996371 | orchestrator | 2025-09-27 21:45:20 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:20.997578 | orchestrator | 2025-09-27 21:45:20 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:20.997611 | orchestrator | 2025-09-27 21:45:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:24.038902 | orchestrator | 2025-09-27 21:45:24 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:24.040294 | orchestrator | 2025-09-27 21:45:24 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:24.042331 | orchestrator | 2025-09-27 21:45:24 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:24.044542 | orchestrator | 2025-09-27 21:45:24 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:24.047751 | orchestrator | 2025-09-27 21:45:24 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:24.048367 | orchestrator | 2025-09-27 21:45:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:27.085862 | orchestrator | 2025-09-27 21:45:27 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:27.086012 | orchestrator | 2025-09-27 21:45:27 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:27.086951 | orchestrator | 2025-09-27 21:45:27 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:27.088185 | orchestrator | 2025-09-27 21:45:27 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:27.090876 | orchestrator | 2025-09-27 21:45:27 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:27.090905 | orchestrator | 2025-09-27 21:45:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:30.131697 | orchestrator | 2025-09-27 21:45:30 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:30.133972 | orchestrator | 2025-09-27 21:45:30 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:30.136308 | orchestrator | 2025-09-27 21:45:30 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:30.138354 | orchestrator | 2025-09-27 21:45:30 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:30.140139 | orchestrator | 2025-09-27 21:45:30 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:30.140249 | orchestrator | 2025-09-27 21:45:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:33.183072 | orchestrator | 2025-09-27 21:45:33 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:33.183205 | orchestrator | 2025-09-27 21:45:33 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:33.184063 | orchestrator | 2025-09-27 21:45:33 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:33.185727 | orchestrator | 2025-09-27 21:45:33 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:33.186983 | orchestrator | 2025-09-27 21:45:33 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:33.187229 | 
orchestrator | 2025-09-27 21:45:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:36.227067 | orchestrator | 2025-09-27 21:45:36 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:36.227296 | orchestrator | 2025-09-27 21:45:36 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:36.227322 | orchestrator | 2025-09-27 21:45:36 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:36.227349 | orchestrator | 2025-09-27 21:45:36 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:36.227537 | orchestrator | 2025-09-27 21:45:36 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:36.227560 | orchestrator | 2025-09-27 21:45:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:39.259018 | orchestrator | 2025-09-27 21:45:39 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:39.259289 | orchestrator | 2025-09-27 21:45:39 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:39.259349 | orchestrator | 2025-09-27 21:45:39 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:39.260202 | orchestrator | 2025-09-27 21:45:39 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:39.261196 | orchestrator | 2025-09-27 21:45:39 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:39.261225 | orchestrator | 2025-09-27 21:45:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:42.313153 | orchestrator | 2025-09-27 21:45:42 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:42.313284 | orchestrator | 2025-09-27 21:45:42 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:42.313866 | orchestrator | 2025-09-27 21:45:42 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:42.314504 | orchestrator | 2025-09-27 21:45:42 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:42.315129 | orchestrator | 2025-09-27 21:45:42 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:42.315265 | orchestrator | 2025-09-27 21:45:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:45.344812 | orchestrator | 2025-09-27 21:45:45 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:45.344994 | orchestrator | 2025-09-27 21:45:45 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:45.345573 | orchestrator | 2025-09-27 21:45:45 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:45.346063 | orchestrator | 2025-09-27 21:45:45 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:45.346735 | orchestrator | 2025-09-27 21:45:45 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:45.346761 | orchestrator | 2025-09-27 21:45:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:48.377701 | orchestrator | 2025-09-27 21:45:48 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:48.378114 | orchestrator | 2025-09-27 21:45:48 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:48.379484 | 
orchestrator | 2025-09-27 21:45:48 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:48.381639 | orchestrator | 2025-09-27 21:45:48 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:48.382161 | orchestrator | 2025-09-27 21:45:48 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:48.382201 | orchestrator | 2025-09-27 21:45:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:51.405454 | orchestrator | 2025-09-27 21:45:51 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:51.405728 | orchestrator | 2025-09-27 21:45:51 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:51.406391 | orchestrator | 2025-09-27 21:45:51 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:51.407834 | orchestrator | 2025-09-27 21:45:51 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:51.408358 | orchestrator | 2025-09-27 21:45:51 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:51.408380 | orchestrator | 2025-09-27 21:45:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:54.435856 | orchestrator | 2025-09-27 21:45:54 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:54.436502 | orchestrator | 2025-09-27 21:45:54 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:54.438479 | orchestrator | 2025-09-27 21:45:54 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:54.438506 | orchestrator | 2025-09-27 21:45:54 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:54.439583 | orchestrator | 2025-09-27 21:45:54 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:54.439662 | orchestrator | 2025-09-27 21:45:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:45:57.479655 | orchestrator | 2025-09-27 21:45:57 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:45:57.479938 | orchestrator | 2025-09-27 21:45:57 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:45:57.480666 | orchestrator | 2025-09-27 21:45:57 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:45:57.481353 | orchestrator | 2025-09-27 21:45:57 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:45:57.482078 | orchestrator | 2025-09-27 21:45:57 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:45:57.482112 | orchestrator | 2025-09-27 21:45:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:00.510254 | orchestrator | 2025-09-27 21:46:00 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:00.510582 | orchestrator | 2025-09-27 21:46:00 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:00.511431 | orchestrator | 2025-09-27 21:46:00 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:00.512075 | orchestrator | 2025-09-27 21:46:00 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:00.512813 | orchestrator | 2025-09-27 21:46:00 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 
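The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines in this stretch of the log come from the deployment client polling the state of the remote tasks once per second until each one reaches a terminal state (SUCCESS appears further down for f01dfe7c and a3632846). As a minimal, purely illustrative Python sketch of such a wait loop — get_task_state() is a hypothetical placeholder, not the actual OSISM client API:

import time

PENDING_STATES = {"PENDING", "STARTED"}

def wait_for_tasks(task_ids, get_task_state, interval=1):
    """Poll every `interval` seconds until all tasks leave PENDING/STARTED."""
    remaining = set(task_ids)
    while remaining:
        for task_id in sorted(remaining):
            state = get_task_state(task_id)          # hypothetical state lookup
            print(f"Task {task_id} is in state {state}")
            if state not in PENDING_STATES:
                remaining.discard(task_id)           # terminal state: stop polling it
        if remaining:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    # Fake state source that reports SUCCESS after three polls.
    polls = {"demo-task": 0}
    def fake_state(task_id):
        polls[task_id] += 1
        return "SUCCESS" if polls[task_id] >= 3 else "STARTED"
    wait_for_tasks(["demo-task"], fake_state)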
2025-09-27 21:46:00.513217 | orchestrator | 2025-09-27 21:46:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:03.547049 | orchestrator | 2025-09-27 21:46:03 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:03.547210 | orchestrator | 2025-09-27 21:46:03 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:03.547813 | orchestrator | 2025-09-27 21:46:03 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:03.548433 | orchestrator | 2025-09-27 21:46:03 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:03.549088 | orchestrator | 2025-09-27 21:46:03 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:03.549238 | orchestrator | 2025-09-27 21:46:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:06.576394 | orchestrator | 2025-09-27 21:46:06 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:06.576516 | orchestrator | 2025-09-27 21:46:06 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:06.577048 | orchestrator | 2025-09-27 21:46:06 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:06.577617 | orchestrator | 2025-09-27 21:46:06 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:06.579463 | orchestrator | 2025-09-27 21:46:06 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:06.579509 | orchestrator | 2025-09-27 21:46:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:09.626165 | orchestrator | 2025-09-27 21:46:09 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:09.631939 | orchestrator | 2025-09-27 21:46:09 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:09.635118 | orchestrator | 2025-09-27 21:46:09 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:09.638083 | orchestrator | 2025-09-27 21:46:09 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:09.638945 | orchestrator | 2025-09-27 21:46:09 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:09.639021 | orchestrator | 2025-09-27 21:46:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:12.669012 | orchestrator | 2025-09-27 21:46:12 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:12.669437 | orchestrator | 2025-09-27 21:46:12 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:12.670546 | orchestrator | 2025-09-27 21:46:12 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:12.671495 | orchestrator | 2025-09-27 21:46:12 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:12.672673 | orchestrator | 2025-09-27 21:46:12 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:12.672776 | orchestrator | 2025-09-27 21:46:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:15.702501 | orchestrator | 2025-09-27 21:46:15 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:15.703854 | orchestrator | 2025-09-27 21:46:15 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 
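The keystone items logged earlier define per-container healthchecks (for example 'healthcheck_curl http://192.168.16.11:5000' with interval 30, retries 3, start period 5), and the barbican items later in this log follow the same pattern. As a rough, hedged illustration of what an HTTP healthcheck of that shape amounts to — this sketch is not Kolla's healthcheck_curl script — the following Python probe treats any answer below HTTP 500 received within the timeout as healthy:

import urllib.error
import urllib.request

def http_healthcheck(url, timeout=30.0):
    """Return True if `url` answers with a non-5xx response within `timeout`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as exc:
        # A 4xx answer still proves the service is listening and responding.
        return exc.code < 500
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    # Example target taken from the keystone healthcheck definition above.
    print(http_healthcheck("http://192.168.16.11:5000", timeout=30.0))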
2025-09-27 21:46:15.704562 | orchestrator | 2025-09-27 21:46:15 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:15.705283 | orchestrator | 2025-09-27 21:46:15 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:15.706210 | orchestrator | 2025-09-27 21:46:15 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:15.706283 | orchestrator | 2025-09-27 21:46:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:18.741588 | orchestrator | 2025-09-27 21:46:18 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:18.744061 | orchestrator | 2025-09-27 21:46:18 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:18.746323 | orchestrator | 2025-09-27 21:46:18 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:18.748465 | orchestrator | 2025-09-27 21:46:18 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:18.749922 | orchestrator | 2025-09-27 21:46:18 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:18.750276 | orchestrator | 2025-09-27 21:46:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:21.785309 | orchestrator | 2025-09-27 21:46:21 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:21.785428 | orchestrator | 2025-09-27 21:46:21 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:21.787246 | orchestrator | 2025-09-27 21:46:21 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:21.789621 | orchestrator | 2025-09-27 21:46:21 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:21.790786 | orchestrator | 2025-09-27 21:46:21 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:21.790832 | orchestrator | 2025-09-27 21:46:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:24.827500 | orchestrator | 2025-09-27 21:46:24 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:24.832232 | orchestrator | 2025-09-27 21:46:24 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:24.834202 | orchestrator | 2025-09-27 21:46:24 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:24.836291 | orchestrator | 2025-09-27 21:46:24 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:24.837987 | orchestrator | 2025-09-27 21:46:24 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:24.838068 | orchestrator | 2025-09-27 21:46:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:27.878302 | orchestrator | 2025-09-27 21:46:27 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:27.878536 | orchestrator | 2025-09-27 21:46:27 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:27.879414 | orchestrator | 2025-09-27 21:46:27 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:27.880149 | orchestrator | 2025-09-27 21:46:27 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:27.881093 | orchestrator | 2025-09-27 21:46:27 | INFO  | Task 
26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:27.881135 | orchestrator | 2025-09-27 21:46:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:30.912393 | orchestrator | 2025-09-27 21:46:30 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:30.912635 | orchestrator | 2025-09-27 21:46:30 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:30.913252 | orchestrator | 2025-09-27 21:46:30 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:30.913824 | orchestrator | 2025-09-27 21:46:30 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:30.914427 | orchestrator | 2025-09-27 21:46:30 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:30.914474 | orchestrator | 2025-09-27 21:46:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:33.951172 | orchestrator | 2025-09-27 21:46:33 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:33.952891 | orchestrator | 2025-09-27 21:46:33 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state STARTED 2025-09-27 21:46:33.954395 | orchestrator | 2025-09-27 21:46:33 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:33.956035 | orchestrator | 2025-09-27 21:46:33 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:33.957486 | orchestrator | 2025-09-27 21:46:33 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:33.957552 | orchestrator | 2025-09-27 21:46:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:36.985614 | orchestrator | 2025-09-27 21:46:36 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:36.985779 | orchestrator | 2025-09-27 21:46:36 | INFO  | Task a3632846-a101-4543-8a5b-2d3c8d814470 is in state SUCCESS 2025-09-27 21:46:36.986190 | orchestrator | 2025-09-27 21:46:36.986220 | orchestrator | 2025-09-27 21:46:36.986304 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-27 21:46:36.986318 | orchestrator | 2025-09-27 21:46:36.986330 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-27 21:46:36.986342 | orchestrator | Saturday 27 September 2025 21:44:00 +0000 (0:00:00.238) 0:00:00.238 **** 2025-09-27 21:46:36.986354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-27 21:46:36.986367 | orchestrator | 2025-09-27 21:46:36.986379 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-27 21:46:36.986390 | orchestrator | Saturday 27 September 2025 21:44:00 +0000 (0:00:00.259) 0:00:00.497 **** 2025-09-27 21:46:36.986402 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-27 21:46:36.986413 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-27 21:46:36.986425 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-27 21:46:36.986436 | orchestrator | 2025-09-27 21:46:36.986447 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-27 21:46:36.986458 | orchestrator | Saturday 27 September 2025 21:44:02 +0000 
(0:00:01.370) 0:00:01.868 **** 2025-09-27 21:46:36.986469 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-27 21:46:36.986480 | orchestrator | 2025-09-27 21:46:36.986491 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-27 21:46:36.986503 | orchestrator | Saturday 27 September 2025 21:44:03 +0000 (0:00:01.169) 0:00:03.038 **** 2025-09-27 21:46:36.986514 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.986525 | orchestrator | 2025-09-27 21:46:36.986536 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-27 21:46:36.986547 | orchestrator | Saturday 27 September 2025 21:44:04 +0000 (0:00:01.096) 0:00:04.134 **** 2025-09-27 21:46:36.986558 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.986572 | orchestrator | 2025-09-27 21:46:36.986590 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-27 21:46:36.986608 | orchestrator | Saturday 27 September 2025 21:44:05 +0000 (0:00:01.009) 0:00:05.144 **** 2025-09-27 21:46:36.986624 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-27 21:46:36.986641 | orchestrator | ok: [testbed-manager] 2025-09-27 21:46:36.986659 | orchestrator | 2025-09-27 21:46:36.986679 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-27 21:46:36.986698 | orchestrator | Saturday 27 September 2025 21:44:47 +0000 (0:00:42.421) 0:00:47.565 **** 2025-09-27 21:46:36.986757 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-27 21:46:36.986779 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-27 21:46:36.986797 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-27 21:46:36.986818 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-27 21:46:36.986840 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-27 21:46:36.986854 | orchestrator | 2025-09-27 21:46:36.986932 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-27 21:46:36.986945 | orchestrator | Saturday 27 September 2025 21:44:51 +0000 (0:00:03.334) 0:00:50.900 **** 2025-09-27 21:46:36.986958 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-27 21:46:36.986970 | orchestrator | 2025-09-27 21:46:36.986982 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-27 21:46:36.986994 | orchestrator | Saturday 27 September 2025 21:44:51 +0000 (0:00:00.400) 0:00:51.301 **** 2025-09-27 21:46:36.987006 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:36.987018 | orchestrator | 2025-09-27 21:46:36.987030 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-27 21:46:36.987042 | orchestrator | Saturday 27 September 2025 21:44:51 +0000 (0:00:00.121) 0:00:51.422 **** 2025-09-27 21:46:36.987057 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:36.987075 | orchestrator | 2025-09-27 21:46:36.987094 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-27 21:46:36.987112 | orchestrator | Saturday 27 September 2025 21:44:51 +0000 (0:00:00.289) 0:00:51.712 **** 2025-09-27 21:46:36.987130 | orchestrator | changed: 
[testbed-manager] 2025-09-27 21:46:36.987149 | orchestrator | 2025-09-27 21:46:36.987167 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-27 21:46:36.987185 | orchestrator | Saturday 27 September 2025 21:44:53 +0000 (0:00:01.828) 0:00:53.541 **** 2025-09-27 21:46:36.987202 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.987218 | orchestrator | 2025-09-27 21:46:36.987235 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-09-27 21:46:36.987253 | orchestrator | Saturday 27 September 2025 21:44:54 +0000 (0:00:00.736) 0:00:54.277 **** 2025-09-27 21:46:36.987292 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.987313 | orchestrator | 2025-09-27 21:46:36.987333 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-09-27 21:46:36.987353 | orchestrator | Saturday 27 September 2025 21:44:55 +0000 (0:00:00.638) 0:00:54.915 **** 2025-09-27 21:46:36.987371 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-27 21:46:36.987391 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-27 21:46:36.987402 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-27 21:46:36.987413 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-27 21:46:36.987423 | orchestrator | 2025-09-27 21:46:36.987434 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:46:36.987446 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-27 21:46:36.987458 | orchestrator | 2025-09-27 21:46:36.987469 | orchestrator | 2025-09-27 21:46:36.987500 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:46:36.987512 | orchestrator | Saturday 27 September 2025 21:44:56 +0000 (0:00:01.472) 0:00:56.388 **** 2025-09-27 21:46:36.987522 | orchestrator | =============================================================================== 2025-09-27 21:46:36.987533 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.42s 2025-09-27 21:46:36.987544 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.33s 2025-09-27 21:46:36.987555 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.83s 2025-09-27 21:46:36.987565 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s 2025-09-27 21:46:36.987590 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.37s 2025-09-27 21:46:36.987601 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.17s 2025-09-27 21:46:36.987612 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.10s 2025-09-27 21:46:36.987623 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.01s 2025-09-27 21:46:36.987679 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.74s 2025-09-27 21:46:36.987692 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2025-09-27 21:46:36.987703 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.40s 2025-09-27 21:46:36.987714 | orchestrator | osism.services.cephclient : Include 
rook task --------------------------- 0.29s 2025-09-27 21:46:36.987724 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s 2025-09-27 21:46:36.987735 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-09-27 21:46:36.987746 | orchestrator | 2025-09-27 21:46:36.987757 | orchestrator | 2025-09-27 21:46:36.987767 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-27 21:46:36.987778 | orchestrator | 2025-09-27 21:46:36.987789 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-27 21:46:36.987800 | orchestrator | Saturday 27 September 2025 21:45:00 +0000 (0:00:00.270) 0:00:00.270 **** 2025-09-27 21:46:36.987810 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.987821 | orchestrator | 2025-09-27 21:46:36.987862 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-27 21:46:36.987904 | orchestrator | Saturday 27 September 2025 21:45:02 +0000 (0:00:01.525) 0:00:01.796 **** 2025-09-27 21:46:36.987916 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.987927 | orchestrator | 2025-09-27 21:46:36.987938 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-27 21:46:36.987950 | orchestrator | Saturday 27 September 2025 21:45:03 +0000 (0:00:01.013) 0:00:02.809 **** 2025-09-27 21:46:36.987961 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.987971 | orchestrator | 2025-09-27 21:46:36.987982 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-27 21:46:36.987993 | orchestrator | Saturday 27 September 2025 21:45:04 +0000 (0:00:01.021) 0:00:03.831 **** 2025-09-27 21:46:36.988004 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.988015 | orchestrator | 2025-09-27 21:46:36.988026 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-27 21:46:36.988037 | orchestrator | Saturday 27 September 2025 21:45:05 +0000 (0:00:01.233) 0:00:05.064 **** 2025-09-27 21:46:36.988048 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.988058 | orchestrator | 2025-09-27 21:46:36.988069 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-27 21:46:36.988080 | orchestrator | Saturday 27 September 2025 21:45:06 +0000 (0:00:01.040) 0:00:06.105 **** 2025-09-27 21:46:36.988091 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.988102 | orchestrator | 2025-09-27 21:46:36.988113 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-27 21:46:36.988124 | orchestrator | Saturday 27 September 2025 21:45:07 +0000 (0:00:01.058) 0:00:07.163 **** 2025-09-27 21:46:36.988134 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.988145 | orchestrator | 2025-09-27 21:46:36.988156 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-27 21:46:36.988167 | orchestrator | Saturday 27 September 2025 21:45:09 +0000 (0:00:02.089) 0:00:09.253 **** 2025-09-27 21:46:36.988178 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.988188 | orchestrator | 2025-09-27 21:46:36.988199 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-27 
21:46:36.988210 | orchestrator | Saturday 27 September 2025 21:45:11 +0000 (0:00:01.274) 0:00:10.527 **** 2025-09-27 21:46:36.988230 | orchestrator | changed: [testbed-manager] 2025-09-27 21:46:36.988241 | orchestrator | 2025-09-27 21:46:36.988252 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-27 21:46:36.988263 | orchestrator | Saturday 27 September 2025 21:46:09 +0000 (0:00:58.132) 0:01:08.660 **** 2025-09-27 21:46:36.988274 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:46:36.988284 | orchestrator | 2025-09-27 21:46:36.988303 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-27 21:46:36.988321 | orchestrator | 2025-09-27 21:46:36.988338 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-27 21:46:36.988355 | orchestrator | Saturday 27 September 2025 21:46:09 +0000 (0:00:00.134) 0:01:08.795 **** 2025-09-27 21:46:36.988373 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:36.988391 | orchestrator | 2025-09-27 21:46:36.988411 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-27 21:46:36.988430 | orchestrator | 2025-09-27 21:46:36.988448 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-27 21:46:36.988466 | orchestrator | Saturday 27 September 2025 21:46:11 +0000 (0:00:01.681) 0:01:10.477 **** 2025-09-27 21:46:36.988477 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:36.988488 | orchestrator | 2025-09-27 21:46:36.988499 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-27 21:46:36.988509 | orchestrator | 2025-09-27 21:46:36.988531 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-27 21:46:36.988542 | orchestrator | Saturday 27 September 2025 21:46:22 +0000 (0:00:11.481) 0:01:21.958 **** 2025-09-27 21:46:36.988553 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:36.988564 | orchestrator | 2025-09-27 21:46:36.988575 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:46:36.988586 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-27 21:46:36.988597 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:46:36.988609 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:46:36.988620 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:46:36.988631 | orchestrator | 2025-09-27 21:46:36.988641 | orchestrator | 2025-09-27 21:46:36.988652 | orchestrator | 2025-09-27 21:46:36.988663 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:46:36.988673 | orchestrator | Saturday 27 September 2025 21:46:34 +0000 (0:00:11.491) 0:01:33.449 **** 2025-09-27 21:46:36.988684 | orchestrator | =============================================================================== 2025-09-27 21:46:36.988695 | orchestrator | Create admin user ------------------------------------------------------ 58.13s 2025-09-27 21:46:36.988706 | orchestrator | Restart ceph manager service ------------------------------------------- 24.65s 
2025-09-27 21:46:36.988716 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s 2025-09-27 21:46:36.988727 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.53s 2025-09-27 21:46:36.988738 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.27s 2025-09-27 21:46:36.988748 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.23s 2025-09-27 21:46:36.988759 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.06s 2025-09-27 21:46:36.988770 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.04s 2025-09-27 21:46:36.988781 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.02s 2025-09-27 21:46:36.988791 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.01s 2025-09-27 21:46:36.988811 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-09-27 21:46:36.988986 | orchestrator | 2025-09-27 21:46:36 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:36.989003 | orchestrator | 2025-09-27 21:46:36 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:36.989014 | orchestrator | 2025-09-27 21:46:36 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:36.989025 | orchestrator | 2025-09-27 21:46:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:40.037930 | orchestrator | 2025-09-27 21:46:40 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:40.038181 | orchestrator | 2025-09-27 21:46:40 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:40.038713 | orchestrator | 2025-09-27 21:46:40 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:40.039354 | orchestrator | 2025-09-27 21:46:40 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:40.039377 | orchestrator | 2025-09-27 21:46:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:43.070325 | orchestrator | 2025-09-27 21:46:43 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:43.070427 | orchestrator | 2025-09-27 21:46:43 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:43.070961 | orchestrator | 2025-09-27 21:46:43 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:43.071544 | orchestrator | 2025-09-27 21:46:43 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:43.071567 | orchestrator | 2025-09-27 21:46:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:46.102174 | orchestrator | 2025-09-27 21:46:46 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:46.102354 | orchestrator | 2025-09-27 21:46:46 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state STARTED 2025-09-27 21:46:46.103610 | orchestrator | 2025-09-27 21:46:46 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:46.104347 | orchestrator | 2025-09-27 21:46:46 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:46.104376 | orchestrator | 2025-09-27 
21:46:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:49.131626 | orchestrator | 2025-09-27 21:46:49 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:49.132635 | orchestrator | 2025-09-27 21:46:49 | INFO  | Task 781733b1-e026-4399-917c-5aa245b71f7d is in state SUCCESS 2025-09-27 21:46:49.136146 | orchestrator | 2025-09-27 21:46:49.136204 | orchestrator | 2025-09-27 21:46:49.136217 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:46:49.136229 | orchestrator | 2025-09-27 21:46:49.136239 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:46:49.136249 | orchestrator | Saturday 27 September 2025 21:44:47 +0000 (0:00:00.406) 0:00:00.406 **** 2025-09-27 21:46:49.136259 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:46:49.136270 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:46:49.136280 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:46:49.136290 | orchestrator | 2025-09-27 21:46:49.136300 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:46:49.136310 | orchestrator | Saturday 27 September 2025 21:44:47 +0000 (0:00:00.356) 0:00:00.762 **** 2025-09-27 21:46:49.136320 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-27 21:46:49.136357 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-27 21:46:49.136368 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-27 21:46:49.136377 | orchestrator | 2025-09-27 21:46:49.136387 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-27 21:46:49.136397 | orchestrator | 2025-09-27 21:46:49.136407 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-27 21:46:49.136417 | orchestrator | Saturday 27 September 2025 21:44:48 +0000 (0:00:00.707) 0:00:01.470 **** 2025-09-27 21:46:49.136427 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:46:49.136438 | orchestrator | 2025-09-27 21:46:49.136448 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-27 21:46:49.136458 | orchestrator | Saturday 27 September 2025 21:44:49 +0000 (0:00:00.660) 0:00:02.131 **** 2025-09-27 21:46:49.136469 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-27 21:46:49.136479 | orchestrator | 2025-09-27 21:46:49.136489 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-27 21:46:49.136498 | orchestrator | Saturday 27 September 2025 21:44:53 +0000 (0:00:04.005) 0:00:06.136 **** 2025-09-27 21:46:49.136508 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-27 21:46:49.136518 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-27 21:46:49.136528 | orchestrator | 2025-09-27 21:46:49.137051 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-27 21:46:49.137071 | orchestrator | Saturday 27 September 2025 21:45:01 +0000 (0:00:08.115) 0:00:14.252 **** 2025-09-27 21:46:49.137082 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:46:49.137095 | 
orchestrator | 2025-09-27 21:46:49.137104 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-27 21:46:49.137114 | orchestrator | Saturday 27 September 2025 21:45:04 +0000 (0:00:03.702) 0:00:17.954 **** 2025-09-27 21:46:49.137124 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:46:49.137242 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-27 21:46:49.137257 | orchestrator | 2025-09-27 21:46:49.137267 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-27 21:46:49.137277 | orchestrator | Saturday 27 September 2025 21:45:09 +0000 (0:00:04.200) 0:00:22.154 **** 2025-09-27 21:46:49.137286 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:46:49.137296 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-27 21:46:49.137306 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-27 21:46:49.137315 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-27 21:46:49.137325 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-27 21:46:49.137335 | orchestrator | 2025-09-27 21:46:49.137344 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-27 21:46:49.137353 | orchestrator | Saturday 27 September 2025 21:45:26 +0000 (0:00:17.409) 0:00:39.563 **** 2025-09-27 21:46:49.137363 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-27 21:46:49.137372 | orchestrator | 2025-09-27 21:46:49.137382 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-27 21:46:49.137391 | orchestrator | Saturday 27 September 2025 21:45:31 +0000 (0:00:04.839) 0:00:44.403 **** 2025-09-27 21:46:49.137420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.137461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.137474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.137495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137568 | orchestrator | 2025-09-27 21:46:49.137578 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-27 21:46:49.137588 | orchestrator | Saturday 27 September 2025 21:45:34 +0000 (0:00:02.990) 0:00:47.393 **** 2025-09-27 21:46:49.137598 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-27 21:46:49.137607 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-27 21:46:49.137616 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-27 21:46:49.137626 | orchestrator | 2025-09-27 21:46:49.137635 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-27 21:46:49.137645 | orchestrator | Saturday 27 September 2025 21:45:35 +0000 (0:00:01.697) 0:00:49.090 **** 2025-09-27 21:46:49.137654 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.137664 | orchestrator | 2025-09-27 21:46:49.137674 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-27 21:46:49.137684 | orchestrator | Saturday 27 September 2025 21:45:36 +0000 (0:00:00.132) 0:00:49.223 **** 2025-09-27 21:46:49.137693 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.137703 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:49.137712 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:49.137722 | orchestrator | 2025-09-27 21:46:49.137731 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-27 21:46:49.137741 | orchestrator | Saturday 27 September 2025 21:45:36 +0000 (0:00:00.520) 0:00:49.743 **** 2025-09-27 21:46:49.137751 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:46:49.137760 | orchestrator | 2025-09-27 21:46:49.137770 | 
orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-27 21:46:49.137786 | orchestrator | Saturday 27 September 2025 21:45:37 +0000 (0:00:00.501) 0:00:50.244 **** 2025-09-27 21:46:49.137801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.137819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.137830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.137840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.137999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138011 | orchestrator | 2025-09-27 21:46:49.138107 | orchestrator 
| TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-27 21:46:49.138119 | orchestrator | Saturday 27 September 2025 21:45:40 +0000 (0:00:03.746) 0:00:53.991 **** 2025-09-27 21:46:49.138131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.138142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138179 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.138199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.138210 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138233 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:49.138244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.138261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138286 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:49.138296 | orchestrator | 2025-09-27 21:46:49.138306 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-27 21:46:49.138315 | orchestrator | Saturday 27 September 2025 21:45:42 +0000 (0:00:01.760) 0:00:55.752 **** 2025-09-27 21:46:49.138332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.138343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138369 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.138379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.138399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138419 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:49.138436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.138446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.138472 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:49.138482 | orchestrator | 2025-09-27 21:46:49.138491 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-27 21:46:49.138501 | orchestrator | Saturday 27 September 2025 21:45:43 +0000 (0:00:00.714) 0:00:56.466 **** 2025-09-27 21:46:49.138515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.138531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.138542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.138552 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138630 | orchestrator | 2025-09-27 21:46:49.138639 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-27 21:46:49.138649 | orchestrator | Saturday 27 September 2025 21:45:47 +0000 (0:00:04.112) 0:01:00.579 **** 2025-09-27 21:46:49.138659 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.138669 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:49.138678 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:49.138695 | orchestrator | 2025-09-27 21:46:49.138705 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-27 21:46:49.138714 | orchestrator | Saturday 27 September 2025 21:45:50 +0000 (0:00:03.122) 0:01:03.702 **** 2025-09-27 21:46:49.138724 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:46:49.138733 | orchestrator | 2025-09-27 21:46:49.138743 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-27 21:46:49.138752 | orchestrator | Saturday 27 September 2025 21:45:52 +0000 (0:00:01.890) 0:01:05.592 **** 2025-09-27 21:46:49.138762 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.138771 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:49.138781 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:49.138790 | orchestrator | 2025-09-27 21:46:49.138800 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-27 21:46:49.138810 | orchestrator | Saturday 27 September 2025 21:45:53 +0000 (0:00:00.825) 0:01:06.418 **** 2025-09-27 21:46:49.138819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.138834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.138851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.138917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.138993 | orchestrator | 2025-09-27 21:46:49.139002 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-27 21:46:49.139012 | orchestrator | Saturday 27 September 2025 21:46:02 +0000 (0:00:09.647) 0:01:16.066 **** 2025-09-27 21:46:49.139029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.139048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.139058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.139068 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.139082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.139093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.139108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.139125 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:49.139135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-27 21:46:49.139145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.139156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:46:49.139165 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:49.139175 | orchestrator | 2025-09-27 21:46:49.139185 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-27 21:46:49.139195 | orchestrator | Saturday 27 September 2025 21:46:04 +0000 (0:00:01.720) 0:01:17.786 **** 2025-09-27 21:46:49.139210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.139228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.139245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-27 21:46:49.139255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.139265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.139280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 
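
As a reading aid for the "Check barbican containers" output above: the barbican-api item logged for testbed-node-0 can be laid out as plain Python. The values are copied verbatim from the log entry above; only the comments are added, and the reading of the empty volume strings as unset optional mounts is an assumption, not something the log states.

    # barbican-api container definition as logged for testbed-node-0 (sketch; values verbatim from the log)
    barbican_api = {
        "container_name": "barbican_api",
        "group": "barbican-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/barbican-api:2024.2",
        "volumes": [
            "/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "barbican:/var/lib/barbican/",
            "kolla_logs:/var/log/kolla/",
            "",  # empty entries presumably correspond to optional mounts left unset (assumption)
            "",
        ],
        "dimensions": {},
        # healthcheck: polling interval, retry count, start period, timeout, and the command to run
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
            "timeout": "30",
        },
        # haproxy: internal and external frontends for the API on port 9311
        "haproxy": {
            "barbican_api": {
                "enabled": "yes", "mode": "http", "external": False,
                "port": "9311", "listen_port": "9311", "tls_backend": "no",
            },
            "barbican_api_external": {
                "enabled": "yes", "mode": "http", "external": True,
                "external_fqdn": "api.testbed.osism.xyz",
                "port": "9311", "listen_port": "9311", "tls_backend": "no",
            },
        },
    }

The barbican-keystone-listener and barbican-worker items in the same task output have the same shape but no haproxy section, presumably because only the API service sits behind HAProxy.
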
2025-09-27 21:46:49.139290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.139319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.139330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:46:49.139340 | orchestrator | 2025-09-27 21:46:49.139350 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-27 21:46:49.139360 | orchestrator | Saturday 27 September 2025 21:46:08 +0000 (0:00:03.595) 0:01:21.382 **** 2025-09-27 21:46:49.139369 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:46:49.139379 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:46:49.139389 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:46:49.139398 | orchestrator | 2025-09-27 21:46:49.139408 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-27 21:46:49.139418 | orchestrator | Saturday 27 September 2025 21:46:08 +0000 (0:00:00.219) 0:01:21.602 **** 2025-09-27 21:46:49.139427 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.139437 | orchestrator | 2025-09-27 21:46:49.139446 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-27 21:46:49.139456 | orchestrator | Saturday 27 September 2025 21:46:10 +0000 (0:00:02.428) 0:01:24.030 **** 2025-09-27 21:46:49.139465 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.139475 | orchestrator | 2025-09-27 21:46:49.139484 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-27 21:46:49.139494 | orchestrator | Saturday 27 September 2025 21:46:13 +0000 (0:00:02.690) 0:01:26.720 **** 2025-09-27 21:46:49.139504 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.139513 | orchestrator | 2025-09-27 21:46:49.139523 | orchestrator | TASK [barbican : Flush handlers] 
*********************************************** 2025-09-27 21:46:49.139532 | orchestrator | Saturday 27 September 2025 21:46:26 +0000 (0:00:12.542) 0:01:39.263 **** 2025-09-27 21:46:49.139542 | orchestrator | 2025-09-27 21:46:49.139551 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-27 21:46:49.139561 | orchestrator | Saturday 27 September 2025 21:46:26 +0000 (0:00:00.062) 0:01:39.325 **** 2025-09-27 21:46:49.139570 | orchestrator | 2025-09-27 21:46:49.139580 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-27 21:46:49.139589 | orchestrator | Saturday 27 September 2025 21:46:26 +0000 (0:00:00.140) 0:01:39.466 **** 2025-09-27 21:46:49.139599 | orchestrator | 2025-09-27 21:46:49.139608 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-27 21:46:49.139618 | orchestrator | Saturday 27 September 2025 21:46:26 +0000 (0:00:00.148) 0:01:39.615 **** 2025-09-27 21:46:49.139628 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.139637 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:49.139647 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:49.139662 | orchestrator | 2025-09-27 21:46:49.139672 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-27 21:46:49.139682 | orchestrator | Saturday 27 September 2025 21:46:34 +0000 (0:00:08.128) 0:01:47.743 **** 2025-09-27 21:46:49.139691 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.139701 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:49.139711 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:49.139720 | orchestrator | 2025-09-27 21:46:49.139730 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-27 21:46:49.139739 | orchestrator | Saturday 27 September 2025 21:46:40 +0000 (0:00:05.680) 0:01:53.423 **** 2025-09-27 21:46:49.139749 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:46:49.139758 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:46:49.139772 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:46:49.139782 | orchestrator | 2025-09-27 21:46:49.139792 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:46:49.139802 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-27 21:46:49.139813 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:46:49.139823 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:46:49.139833 | orchestrator | 2025-09-27 21:46:49.139842 | orchestrator | 2025-09-27 21:46:49.139852 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:46:49.139878 | orchestrator | Saturday 27 September 2025 21:46:45 +0000 (0:00:05.572) 0:01:58.996 **** 2025-09-27 21:46:49.139888 | orchestrator | =============================================================================== 2025-09-27 21:46:49.139898 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.41s 2025-09-27 21:46:49.139912 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.54s 2025-09-27 21:46:49.139922 | orchestrator | barbican : 
Copying over barbican.conf ----------------------------------- 9.65s 2025-09-27 21:46:49.139932 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.13s 2025-09-27 21:46:49.139941 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 8.12s 2025-09-27 21:46:49.139951 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.68s 2025-09-27 21:46:49.139960 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.57s 2025-09-27 21:46:49.139970 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.84s 2025-09-27 21:46:49.139979 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.20s 2025-09-27 21:46:49.139989 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.11s 2025-09-27 21:46:49.139998 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.01s 2025-09-27 21:46:49.140008 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.75s 2025-09-27 21:46:49.140017 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.70s 2025-09-27 21:46:49.140027 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.60s 2025-09-27 21:46:49.140036 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.12s 2025-09-27 21:46:49.140046 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.99s 2025-09-27 21:46:49.140055 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.69s 2025-09-27 21:46:49.140065 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.43s 2025-09-27 21:46:49.140075 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.89s 2025-09-27 21:46:49.140091 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.76s 2025-09-27 21:46:49.140101 | orchestrator | 2025-09-27 21:46:49 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:49.140111 | orchestrator | 2025-09-27 21:46:49 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:46:49.140121 | orchestrator | 2025-09-27 21:46:49 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:49.140130 | orchestrator | 2025-09-27 21:46:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:52.173750 | orchestrator | 2025-09-27 21:46:52 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:52.173916 | orchestrator | 2025-09-27 21:46:52 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:52.174331 | orchestrator | 2025-09-27 21:46:52 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:46:52.175928 | orchestrator | 2025-09-27 21:46:52 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:52.175997 | orchestrator | 2025-09-27 21:46:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:55.200308 | orchestrator | 2025-09-27 21:46:55 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:55.200536 | orchestrator | 2025-09-27 
21:46:55 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:55.201214 | orchestrator | 2025-09-27 21:46:55 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:46:55.201916 | orchestrator | 2025-09-27 21:46:55 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:55.201941 | orchestrator | 2025-09-27 21:46:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:46:58.226217 | orchestrator | 2025-09-27 21:46:58 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:46:58.226470 | orchestrator | 2025-09-27 21:46:58 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:46:58.227201 | orchestrator | 2025-09-27 21:46:58 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:46:58.228185 | orchestrator | 2025-09-27 21:46:58 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:46:58.228222 | orchestrator | 2025-09-27 21:46:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:01.271834 | orchestrator | 2025-09-27 21:47:01 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:01.272109 | orchestrator | 2025-09-27 21:47:01 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:01.273170 | orchestrator | 2025-09-27 21:47:01 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:01.274197 | orchestrator | 2025-09-27 21:47:01 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:01.274224 | orchestrator | 2025-09-27 21:47:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:04.368755 | orchestrator | 2025-09-27 21:47:04 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:04.369103 | orchestrator | 2025-09-27 21:47:04 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:04.369893 | orchestrator | 2025-09-27 21:47:04 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:04.370726 | orchestrator | 2025-09-27 21:47:04 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:04.370917 | orchestrator | 2025-09-27 21:47:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:07.404994 | orchestrator | 2025-09-27 21:47:07 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:07.405180 | orchestrator | 2025-09-27 21:47:07 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:07.405680 | orchestrator | 2025-09-27 21:47:07 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:07.406895 | orchestrator | 2025-09-27 21:47:07 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:07.407063 | orchestrator | 2025-09-27 21:47:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:10.437424 | orchestrator | 2025-09-27 21:47:10 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:10.438159 | orchestrator | 2025-09-27 21:47:10 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:10.439087 | orchestrator | 2025-09-27 21:47:10 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:10.440053 | orchestrator | 2025-09-27 
21:47:10 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:10.440192 | orchestrator | 2025-09-27 21:47:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:13.481403 | orchestrator | 2025-09-27 21:47:13 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:13.481942 | orchestrator | 2025-09-27 21:47:13 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:13.482373 | orchestrator | 2025-09-27 21:47:13 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:13.483068 | orchestrator | 2025-09-27 21:47:13 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:13.483109 | orchestrator | 2025-09-27 21:47:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:16.529718 | orchestrator | 2025-09-27 21:47:16 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:16.532927 | orchestrator | 2025-09-27 21:47:16 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:16.535996 | orchestrator | 2025-09-27 21:47:16 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:16.538705 | orchestrator | 2025-09-27 21:47:16 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:16.539253 | orchestrator | 2025-09-27 21:47:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:19.579585 | orchestrator | 2025-09-27 21:47:19 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:19.579672 | orchestrator | 2025-09-27 21:47:19 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:19.579687 | orchestrator | 2025-09-27 21:47:19 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:19.579698 | orchestrator | 2025-09-27 21:47:19 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:19.579709 | orchestrator | 2025-09-27 21:47:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:22.604479 | orchestrator | 2025-09-27 21:47:22 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:22.606168 | orchestrator | 2025-09-27 21:47:22 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:22.607710 | orchestrator | 2025-09-27 21:47:22 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:22.611594 | orchestrator | 2025-09-27 21:47:22 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:22.611652 | orchestrator | 2025-09-27 21:47:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:25.655671 | orchestrator | 2025-09-27 21:47:25 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:25.656738 | orchestrator | 2025-09-27 21:47:25 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:25.658746 | orchestrator | 2025-09-27 21:47:25 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:25.660734 | orchestrator | 2025-09-27 21:47:25 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:25.660986 | orchestrator | 2025-09-27 21:47:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:28.709984 | orchestrator | 2025-09-27 21:47:28 | INFO  | Task 
f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:28.711549 | orchestrator | 2025-09-27 21:47:28 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:28.713407 | orchestrator | 2025-09-27 21:47:28 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:28.715176 | orchestrator | 2025-09-27 21:47:28 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:28.715251 | orchestrator | 2025-09-27 21:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:31.756041 | orchestrator | 2025-09-27 21:47:31 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:31.757802 | orchestrator | 2025-09-27 21:47:31 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:31.760864 | orchestrator | 2025-09-27 21:47:31 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:31.762945 | orchestrator | 2025-09-27 21:47:31 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:31.763002 | orchestrator | 2025-09-27 21:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:34.816073 | orchestrator | 2025-09-27 21:47:34 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:34.819264 | orchestrator | 2025-09-27 21:47:34 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:34.821246 | orchestrator | 2025-09-27 21:47:34 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:34.823656 | orchestrator | 2025-09-27 21:47:34 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:34.823719 | orchestrator | 2025-09-27 21:47:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:37.874714 | orchestrator | 2025-09-27 21:47:37 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:37.875500 | orchestrator | 2025-09-27 21:47:37 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:37.877311 | orchestrator | 2025-09-27 21:47:37 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:37.878557 | orchestrator | 2025-09-27 21:47:37 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:37.878585 | orchestrator | 2025-09-27 21:47:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:40.926313 | orchestrator | 2025-09-27 21:47:40 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:40.927540 | orchestrator | 2025-09-27 21:47:40 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:40.929268 | orchestrator | 2025-09-27 21:47:40 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:40.930979 | orchestrator | 2025-09-27 21:47:40 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:40.931010 | orchestrator | 2025-09-27 21:47:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:43.975537 | orchestrator | 2025-09-27 21:47:43 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:43.977418 | orchestrator | 2025-09-27 21:47:43 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:43.979935 | orchestrator | 2025-09-27 21:47:43 | INFO  | Task 
5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:43.981216 | orchestrator | 2025-09-27 21:47:43 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:43.981283 | orchestrator | 2025-09-27 21:47:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:47.017776 | orchestrator | 2025-09-27 21:47:47 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:47.020499 | orchestrator | 2025-09-27 21:47:47 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:47.022386 | orchestrator | 2025-09-27 21:47:47 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:47.023751 | orchestrator | 2025-09-27 21:47:47 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:47.023776 | orchestrator | 2025-09-27 21:47:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:50.066649 | orchestrator | 2025-09-27 21:47:50 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:50.070071 | orchestrator | 2025-09-27 21:47:50 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:50.072069 | orchestrator | 2025-09-27 21:47:50 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:50.073628 | orchestrator | 2025-09-27 21:47:50 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:50.073651 | orchestrator | 2025-09-27 21:47:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:53.116879 | orchestrator | 2025-09-27 21:47:53 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state STARTED 2025-09-27 21:47:53.119520 | orchestrator | 2025-09-27 21:47:53 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:53.121600 | orchestrator | 2025-09-27 21:47:53 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:53.123679 | orchestrator | 2025-09-27 21:47:53 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:53.123744 | orchestrator | 2025-09-27 21:47:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:56.176672 | orchestrator | 2025-09-27 21:47:56 | INFO  | Task f3933917-113e-40af-8e5c-dbb7195fbf3f is in state SUCCESS 2025-09-27 21:47:56.178737 | orchestrator | 2025-09-27 21:47:56.178795 | orchestrator | 2025-09-27 21:47:56.178809 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:47:56.178851 | orchestrator | 2025-09-27 21:47:56.178863 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:47:56.178987 | orchestrator | Saturday 27 September 2025 21:44:47 +0000 (0:00:00.410) 0:00:00.410 **** 2025-09-27 21:47:56.179292 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:47:56.179368 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:47:56.179380 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:47:56.179391 | orchestrator | 2025-09-27 21:47:56.179402 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:47:56.179413 | orchestrator | Saturday 27 September 2025 21:44:48 +0000 (0:00:00.383) 0:00:00.794 **** 2025-09-27 21:47:56.179425 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-27 21:47:56.179437 | orchestrator | ok: [testbed-node-1] => 
(item=enable_designate_True) 2025-09-27 21:47:56.179448 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-27 21:47:56.179458 | orchestrator | 2025-09-27 21:47:56.179470 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-27 21:47:56.179481 | orchestrator | 2025-09-27 21:47:56.179491 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-27 21:47:56.179502 | orchestrator | Saturday 27 September 2025 21:44:48 +0000 (0:00:00.599) 0:00:01.393 **** 2025-09-27 21:47:56.179513 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:47:56.179525 | orchestrator | 2025-09-27 21:47:56.179536 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-27 21:47:56.179547 | orchestrator | Saturday 27 September 2025 21:44:49 +0000 (0:00:00.837) 0:00:02.231 **** 2025-09-27 21:47:56.179573 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-27 21:47:56.179585 | orchestrator | 2025-09-27 21:47:56.179596 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-27 21:47:56.179606 | orchestrator | Saturday 27 September 2025 21:44:53 +0000 (0:00:03.901) 0:00:06.132 **** 2025-09-27 21:47:56.179617 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-27 21:47:56.179663 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-27 21:47:56.179675 | orchestrator | 2025-09-27 21:47:56.179686 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-27 21:47:56.179698 | orchestrator | Saturday 27 September 2025 21:45:00 +0000 (0:00:07.586) 0:00:13.719 **** 2025-09-27 21:47:56.179709 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-27 21:47:56.179721 | orchestrator | 2025-09-27 21:47:56.179732 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-27 21:47:56.179742 | orchestrator | Saturday 27 September 2025 21:45:04 +0000 (0:00:03.878) 0:00:17.597 **** 2025-09-27 21:47:56.179785 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:47:56.179796 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-27 21:47:56.179807 | orchestrator | 2025-09-27 21:47:56.179844 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-27 21:47:56.179855 | orchestrator | Saturday 27 September 2025 21:45:08 +0000 (0:00:04.126) 0:00:21.724 **** 2025-09-27 21:47:56.179866 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:47:56.179878 | orchestrator | 2025-09-27 21:47:56.179888 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-27 21:47:56.179899 | orchestrator | Saturday 27 September 2025 21:45:12 +0000 (0:00:03.530) 0:00:25.254 **** 2025-09-27 21:47:56.179910 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-27 21:47:56.179920 | orchestrator | 2025-09-27 21:47:56.179931 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-27 21:47:56.179942 | orchestrator | Saturday 27 September 2025 21:45:17 
+0000 (0:00:04.598) 0:00:29.853 **** 2025-09-27 21:47:56.179957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.180012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.180033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.180083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180307 | orchestrator | 2025-09-27 21:47:56.180318 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-27 21:47:56.180329 | orchestrator | Saturday 27 September 2025 21:45:20 +0000 (0:00:03.063) 0:00:32.917 **** 2025-09-27 21:47:56.180341 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.180352 | orchestrator | 2025-09-27 21:47:56.180363 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-27 21:47:56.180382 | orchestrator | Saturday 27 September 2025 21:45:20 +0000 (0:00:00.139) 0:00:33.056 **** 2025-09-27 21:47:56.180393 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.180404 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:56.180414 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:56.180425 | orchestrator | 2025-09-27 21:47:56.180436 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-27 21:47:56.180447 | orchestrator | Saturday 27 September 2025 21:45:20 +0000 (0:00:00.296) 0:00:33.353 **** 2025-09-27 21:47:56.180458 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:47:56.180469 | orchestrator | 2025-09-27 21:47:56.180480 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-27 21:47:56.180491 | orchestrator | Saturday 27 September 2025 21:45:21 +0000 (0:00:00.714) 0:00:34.067 **** 2025-09-27 21:47:56.180503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.180528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.180546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.180558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.180777 | orchestrator | 2025-09-27 21:47:56.180788 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-27 21:47:56.180799 | orchestrator | Saturday 27 September 2025 21:45:27 +0000 (0:00:06.588) 0:00:40.655 **** 2025-09-27 21:47:56.180837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.180859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.180889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.180909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.180928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.180948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.180959 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.180971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.180983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.181001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181058 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:56.181070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.181081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.181102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181164 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:56.181175 | orchestrator | 2025-09-27 21:47:56.181186 | orchestrator | TASK 
[service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-27 21:47:56.181197 | orchestrator | Saturday 27 September 2025 21:45:28 +0000 (0:00:00.874) 0:00:41.530 **** 2025-09-27 21:47:56.181208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.181220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.181237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181295 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.181306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.181317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.181334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181420 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:56.181431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.181442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.181454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.181518 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:56.181529 | orchestrator | 2025-09-27 21:47:56.181540 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-27 21:47:56.181551 | orchestrator | Saturday 27 September 2025 21:45:30 +0000 (0:00:01.401) 0:00:42.932 **** 2025-09-27 21:47:56.181563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.181574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.181593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.181611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181650 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181893 | orchestrator | 2025-09-27 21:47:56.181910 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-27 21:47:56.181922 | orchestrator | Saturday 27 September 2025 21:45:38 +0000 (0:00:08.124) 0:00:51.056 **** 2025-09-27 21:47:56.181933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.181945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.181956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.181983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.181995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-27 21:47:56.182203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-27 21:47:56.182220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-27 21:47:56.182232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-27 21:47:56.182243 | orchestrator |
2025-09-27 21:47:56.182254 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-09-27 21:47:56.182265 | orchestrator | Saturday 27 September 2025 21:46:02 +0000 (0:00:23.874) 0:01:14.931 ****
2025-09-27 21:47:56.182276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-27 21:47:56.182287 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-27 21:47:56.182297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-09-27 21:47:56.182308 | orchestrator |
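
The pools.yaml rendered by the task above is the piece of Designate configuration that tells designate-central and designate-worker which DNS backends to manage, tying them to the designate-backend-bind9 and designate-mdns containers deployed here. A minimal sketch of what such a rendered file can look like, assuming a single default pool, the bind9 backend used in this deployment, and the control-node addresses seen in the health checks above (192.168.16.10/.11/.12); names, ports, and paths are illustrative and not taken from this log:

  - name: default
    description: Default pool
    attributes: {}
    ns_records:
      - hostname: ns1.testbed.osism.xyz.   # assumed NS name, not shown in the log
        priority: 1
    nameservers:                           # bind9 instances that serve the zones
      - host: 192.168.16.10
        port: 53
      - host: 192.168.16.11
        port: 53
      - host: 192.168.16.12
        port: 53
    targets:
      - type: bind9
        description: BIND9 on testbed-node-0 (one target per bind9 instance)
        masters:                           # designate-mdns endpoints that feed zone transfers
          - host: 192.168.16.10
            port: 5354
          - host: 192.168.16.11
            port: 5354
          - host: 192.168.16.12
            port: 5354
        options:
          host: 192.168.16.10
          port: 53
          rndc_host: 192.168.16.10
          rndc_port: 953
          rndc_key_file: /etc/designate/rndc.key   # illustrative path; see the rndc.key task below

The named.conf, rndc.conf and rndc.key tasks that follow render the matching BIND side of this wiring, so the rndc_* options above only take effect once those files are in place on the bind9 hosts.
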
2025-09-27 21:47:56.182324 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-09-27 21:47:56.182335 | orchestrator | Saturday 27 September 2025 21:46:09 +0000 (0:00:07.009) 0:01:21.940 ****
2025-09-27 21:47:56.182346 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-27 21:47:56.182356 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-27 21:47:56.182367 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-09-27 21:47:56.182378 | orchestrator |
2025-09-27 21:47:56.182389 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-09-27 21:47:56.182399 | orchestrator | Saturday 27 September 2025 21:46:12 +0000 (0:00:03.319) 0:01:25.260 ****
2025-09-27 21:47:56.182409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-27 21:47:56.182426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-27 21:47:56.182443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-27 21:47:56.182454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '',
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 
'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182640 | orchestrator | 2025-09-27 21:47:56.182650 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-27 21:47:56.182667 | orchestrator | Saturday 27 September 2025 21:46:16 +0000 (0:00:03.688) 0:01:28.948 **** 2025-09-27 21:47:56.182677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.182693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.182703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.182718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.182903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.182938 | orchestrator | 2025-09-27 21:47:56.182948 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-27 21:47:56.182958 | orchestrator | Saturday 27 September 2025 21:46:19 +0000 (0:00:03.372) 0:01:32.321 **** 2025-09-27 21:47:56.182967 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.182977 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:56.182987 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:56.183020 | orchestrator | 2025-09-27 21:47:56.183030 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-27 21:47:56.183040 | orchestrator | Saturday 27 September 2025 21:46:20 +0000 (0:00:00.475) 0:01:32.797 **** 2025-09-27 21:47:56.183054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.183072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.183082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183129 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.183150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.183161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.183171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183216 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:56.183237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-27 21:47:56.183248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-27 21:47:56.183258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-27 21:47:56.183311 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:56.183321 | orchestrator | 2025-09-27 21:47:56.183330 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-27 21:47:56.183340 | orchestrator | Saturday 27 September 2025 21:46:21 +0000 (0:00:01.815) 0:01:34.612 **** 2025-09-27 21:47:56.183354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.183365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.183375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-27 21:47:56.183385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-27 21:47:56.183584 | orchestrator | 2025-09-27 21:47:56.183594 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-27 21:47:56.183604 | orchestrator | Saturday 27 September 2025 21:46:27 +0000 (0:00:05.534) 0:01:40.147 **** 2025-09-27 21:47:56.183613 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:47:56.183623 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:47:56.183633 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:47:56.183642 | orchestrator | 2025-09-27 21:47:56.183651 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-27 21:47:56.183661 | orchestrator | Saturday 27 September 2025 21:46:27 +0000 (0:00:00.583) 0:01:40.731 **** 2025-09-27 21:47:56.183671 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-27 21:47:56.183680 | orchestrator | 2025-09-27 21:47:56.183690 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-09-27 21:47:56.183699 | orchestrator | Saturday 27 September 2025 21:46:30 +0000 (0:00:02.690) 0:01:43.421 **** 2025-09-27 21:47:56.183709 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:47:56.183719 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-27 21:47:56.183728 | orchestrator | 2025-09-27 21:47:56.183738 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-27 21:47:56.183747 | orchestrator | Saturday 27 September 2025 21:46:33 +0000 (0:00:02.446) 0:01:45.867 **** 2025-09-27 21:47:56.183757 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.183766 | orchestrator | 2025-09-27 21:47:56.183776 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-27 21:47:56.183790 | orchestrator | Saturday 27 September 2025 21:46:49 +0000 (0:00:16.196) 0:02:02.064 **** 2025-09-27 21:47:56.183800 | orchestrator | 2025-09-27 21:47:56.183830 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-27 21:47:56.183841 | orchestrator | Saturday 27 September 2025 21:46:49 
+0000 (0:00:00.591) 0:02:02.655 **** 2025-09-27 21:47:56.183851 | orchestrator | 2025-09-27 21:47:56.183861 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-27 21:47:56.183870 | orchestrator | Saturday 27 September 2025 21:46:50 +0000 (0:00:00.154) 0:02:02.810 **** 2025-09-27 21:47:56.183880 | orchestrator | 2025-09-27 21:47:56.183889 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-27 21:47:56.183899 | orchestrator | Saturday 27 September 2025 21:46:50 +0000 (0:00:00.160) 0:02:02.971 **** 2025-09-27 21:47:56.183908 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.183918 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:56.183927 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:56.183937 | orchestrator | 2025-09-27 21:47:56.183946 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-27 21:47:56.183956 | orchestrator | Saturday 27 September 2025 21:47:03 +0000 (0:00:13.654) 0:02:16.625 **** 2025-09-27 21:47:56.183965 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.183975 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:56.183984 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:56.183994 | orchestrator | 2025-09-27 21:47:56.184003 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-27 21:47:56.184013 | orchestrator | Saturday 27 September 2025 21:47:10 +0000 (0:00:06.381) 0:02:23.007 **** 2025-09-27 21:47:56.184022 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:56.184032 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:56.184041 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.184051 | orchestrator | 2025-09-27 21:47:56.184060 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-27 21:47:56.184070 | orchestrator | Saturday 27 September 2025 21:47:18 +0000 (0:00:08.288) 0:02:31.296 **** 2025-09-27 21:47:56.184086 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.184095 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:56.184105 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:56.184114 | orchestrator | 2025-09-27 21:47:56.184124 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-27 21:47:56.184133 | orchestrator | Saturday 27 September 2025 21:47:28 +0000 (0:00:10.141) 0:02:41.437 **** 2025-09-27 21:47:56.184143 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.184152 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:56.184162 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:56.184171 | orchestrator | 2025-09-27 21:47:56.184181 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-27 21:47:56.184191 | orchestrator | Saturday 27 September 2025 21:47:34 +0000 (0:00:05.788) 0:02:47.225 **** 2025-09-27 21:47:56.184200 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.184210 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:47:56.184219 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:47:56.184228 | orchestrator | 2025-09-27 21:47:56.184238 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-27 21:47:56.184247 | orchestrator | Saturday 27 September 2025 21:47:45 +0000 
(0:00:10.540) 0:02:57.765 **** 2025-09-27 21:47:56.184257 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:47:56.184266 | orchestrator | 2025-09-27 21:47:56.184276 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:47:56.184286 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-27 21:47:56.184296 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:47:56.184306 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:47:56.184316 | orchestrator | 2025-09-27 21:47:56.184325 | orchestrator | 2025-09-27 21:47:56.184341 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:47:56.184362 | orchestrator | Saturday 27 September 2025 21:47:52 +0000 (0:00:07.764) 0:03:05.529 **** 2025-09-27 21:47:56.184372 | orchestrator | =============================================================================== 2025-09-27 21:47:56.184382 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.88s 2025-09-27 21:47:56.184392 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.20s 2025-09-27 21:47:56.184401 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.66s 2025-09-27 21:47:56.184411 | orchestrator | designate : Restart designate-worker container ------------------------- 10.54s 2025-09-27 21:47:56.184420 | orchestrator | designate : Restart designate-producer container ----------------------- 10.14s 2025-09-27 21:47:56.184430 | orchestrator | designate : Restart designate-central container ------------------------- 8.29s 2025-09-27 21:47:56.184439 | orchestrator | designate : Copying over config.json files for services ----------------- 8.12s 2025-09-27 21:47:56.184449 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.76s 2025-09-27 21:47:56.184458 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.59s 2025-09-27 21:47:56.184468 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.01s 2025-09-27 21:47:56.184477 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.59s 2025-09-27 21:47:56.184487 | orchestrator | designate : Restart designate-api container ----------------------------- 6.38s 2025-09-27 21:47:56.184496 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.79s 2025-09-27 21:47:56.184511 | orchestrator | designate : Check designate containers ---------------------------------- 5.53s 2025-09-27 21:47:56.184520 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.60s 2025-09-27 21:47:56.184536 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.13s 2025-09-27 21:47:56.184546 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.90s 2025-09-27 21:47:56.184564 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.88s 2025-09-27 21:47:56.184574 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.69s 2025-09-27 21:47:56.184583 | orchestrator | service-ks-register : 
designate | Creating roles ------------------------ 3.53s 2025-09-27 21:47:56.184593 | orchestrator | 2025-09-27 21:47:56 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:47:56.184603 | orchestrator | 2025-09-27 21:47:56 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:56.184750 | orchestrator | 2025-09-27 21:47:56 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:56.184856 | orchestrator | 2025-09-27 21:47:56 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:56.184874 | orchestrator | 2025-09-27 21:47:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:47:59.244953 | orchestrator | 2025-09-27 21:47:59 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:47:59.245062 | orchestrator | 2025-09-27 21:47:59 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:47:59.245560 | orchestrator | 2025-09-27 21:47:59 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state STARTED 2025-09-27 21:47:59.246273 | orchestrator | 2025-09-27 21:47:59 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:47:59.246325 | orchestrator | 2025-09-27 21:47:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:02.304021 | orchestrator | 2025-09-27 21:48:02 | INFO  | Task e2f85a2a-cd68-4cdb-b429-d5c06431aa8a is in state STARTED 2025-09-27 21:48:02.304782 | orchestrator | 2025-09-27 21:48:02 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:02.305274 | orchestrator | 2025-09-27 21:48:02 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:02.306528 | orchestrator | 2025-09-27 21:48:02 | INFO  | Task 5596e0dc-5954-4326-9d13-e126b0f724ef is in state SUCCESS 2025-09-27 21:48:02.307229 | orchestrator | 2025-09-27 21:48:02.307264 | orchestrator | 2025-09-27 21:48:02.307277 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:48:02.307290 | orchestrator | 2025-09-27 21:48:02.307301 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:48:02.307313 | orchestrator | Saturday 27 September 2025 21:46:51 +0000 (0:00:00.356) 0:00:00.356 **** 2025-09-27 21:48:02.307324 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:48:02.307336 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:48:02.307347 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:48:02.307358 | orchestrator | 2025-09-27 21:48:02.307369 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:48:02.307380 | orchestrator | Saturday 27 September 2025 21:46:52 +0000 (0:00:00.661) 0:00:01.017 **** 2025-09-27 21:48:02.307475 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-27 21:48:02.307489 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-27 21:48:02.307500 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-27 21:48:02.307511 | orchestrator | 2025-09-27 21:48:02.307522 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-27 21:48:02.307533 | orchestrator | 2025-09-27 21:48:02.307544 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-27 21:48:02.307555 | 
orchestrator | Saturday 27 September 2025 21:46:53 +0000 (0:00:00.887) 0:00:01.904 **** 2025-09-27 21:48:02.307590 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:48:02.307602 | orchestrator | 2025-09-27 21:48:02.307613 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-27 21:48:02.307624 | orchestrator | Saturday 27 September 2025 21:46:53 +0000 (0:00:00.538) 0:00:02.442 **** 2025-09-27 21:48:02.307635 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-27 21:48:02.307646 | orchestrator | 2025-09-27 21:48:02.307656 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-09-27 21:48:02.307667 | orchestrator | Saturday 27 September 2025 21:46:57 +0000 (0:00:03.533) 0:00:05.976 **** 2025-09-27 21:48:02.307678 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-27 21:48:02.307690 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-27 21:48:02.307701 | orchestrator | 2025-09-27 21:48:02.307711 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-27 21:48:02.307722 | orchestrator | Saturday 27 September 2025 21:47:04 +0000 (0:00:07.027) 0:00:13.004 **** 2025-09-27 21:48:02.307733 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:48:02.307744 | orchestrator | 2025-09-27 21:48:02.307767 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-27 21:48:02.307778 | orchestrator | Saturday 27 September 2025 21:47:08 +0000 (0:00:03.696) 0:00:16.700 **** 2025-09-27 21:48:02.307790 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:48:02.307800 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-27 21:48:02.307833 | orchestrator | 2025-09-27 21:48:02.307844 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-27 21:48:02.307855 | orchestrator | Saturday 27 September 2025 21:47:12 +0000 (0:00:04.044) 0:00:20.745 **** 2025-09-27 21:48:02.307866 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:48:02.307876 | orchestrator | 2025-09-27 21:48:02.307888 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-27 21:48:02.307899 | orchestrator | Saturday 27 September 2025 21:47:15 +0000 (0:00:03.614) 0:00:24.360 **** 2025-09-27 21:48:02.307910 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-09-27 21:48:02.307920 | orchestrator | 2025-09-27 21:48:02.307931 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-27 21:48:02.307942 | orchestrator | Saturday 27 September 2025 21:47:19 +0000 (0:00:04.124) 0:00:28.485 **** 2025-09-27 21:48:02.307953 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:48:02.307964 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:02.307974 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:02.307985 | orchestrator | 2025-09-27 21:48:02.307996 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-27 21:48:02.308007 | orchestrator | Saturday 27 September 2025 
21:47:20 +0000 (0:00:00.269) 0:00:28.754 **** 2025-09-27 21:48:02.308021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308082 | orchestrator | 2025-09-27 21:48:02.308093 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-27 21:48:02.308105 | orchestrator | Saturday 27 September 2025 21:47:21 +0000 (0:00:01.020) 0:00:29.774 **** 2025-09-27 21:48:02.308118 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:48:02.308131 | orchestrator | 2025-09-27 21:48:02.308148 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-27 21:48:02.308161 | orchestrator | Saturday 27 September 2025 21:47:21 +0000 (0:00:00.112) 0:00:29.887 **** 2025-09-27 21:48:02.308173 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 21:48:02.308185 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:02.308197 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:02.308209 | orchestrator | 2025-09-27 21:48:02.308221 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-27 21:48:02.308233 | orchestrator | Saturday 27 September 2025 21:47:21 +0000 (0:00:00.418) 0:00:30.306 **** 2025-09-27 21:48:02.308245 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:48:02.308257 | orchestrator | 2025-09-27 21:48:02.308269 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-27 21:48:02.308281 | orchestrator | Saturday 27 September 2025 21:47:22 +0000 (0:00:00.495) 0:00:30.801 **** 2025-09-27 21:48:02.308294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308350 | orchestrator | 2025-09-27 21:48:02.308362 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-27 21:48:02.308375 | orchestrator | Saturday 27 September 2025 21:47:23 +0000 (0:00:01.559) 0:00:32.361 **** 2025-09-27 21:48:02.308392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308404 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:48:02.308416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308434 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:02.308451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308463 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:02.308474 | orchestrator | 2025-09-27 21:48:02.308485 | orchestrator | TASK [service-cert-copy : 
placement | Copying over backend internal TLS key] *** 2025-09-27 21:48:02.308495 | orchestrator | Saturday 27 September 2025 21:47:24 +0000 (0:00:00.859) 0:00:33.221 **** 2025-09-27 21:48:02.308507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308518 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:48:02.308534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308545 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:02.308556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308574 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:02.308584 | orchestrator | 2025-09-27 21:48:02.308595 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-27 21:48:02.308606 | orchestrator | Saturday 27 September 2025 21:47:25 +0000 (0:00:00.711) 0:00:33.932 **** 2025-09-27 21:48:02.308622 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308658 | orchestrator | 2025-09-27 21:48:02.308669 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-27 21:48:02.308684 | orchestrator | Saturday 27 September 2025 21:47:26 +0000 (0:00:01.404) 0:00:35.337 **** 2025-09-27 21:48:02.308695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.308745 | orchestrator | 2025-09-27 21:48:02.308756 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-27 21:48:02.308767 | orchestrator | Saturday 27 September 2025 21:47:29 +0000 (0:00:02.306) 0:00:37.643 **** 2025-09-27 21:48:02.308778 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-27 21:48:02.308789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-27 21:48:02.308800 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-27 21:48:02.308825 | orchestrator | 2025-09-27 21:48:02.308836 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-27 21:48:02.308847 | orchestrator | Saturday 27 September 2025 21:47:31 +0000 (0:00:01.984) 0:00:39.628 **** 2025-09-27 21:48:02.308858 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:48:02.308869 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:48:02.308880 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:48:02.308891 | orchestrator | 2025-09-27 21:48:02.308901 | orchestrator | 
TASK [placement : Copying over existing policy file] *************************** 2025-09-27 21:48:02.308912 | orchestrator | Saturday 27 September 2025 21:47:32 +0000 (0:00:01.273) 0:00:40.902 **** 2025-09-27 21:48:02.308928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308946 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:48:02.308957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308969 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:48:02.308986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-27 21:48:02.308998 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:48:02.309009 | orchestrator | 2025-09-27 21:48:02.309020 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-27 21:48:02.309031 | orchestrator | Saturday 27 September 2025 21:47:32 +0000 (0:00:00.508) 0:00:41.410 **** 2025-09-27 21:48:02.309042 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.309065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.309083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-27 21:48:02.309094 | orchestrator | 2025-09-27 21:48:02.309105 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-27 21:48:02.309116 | orchestrator | Saturday 27 September 2025 21:47:33 +0000 (0:00:01.059) 0:00:42.470 **** 2025-09-27 21:48:02.309127 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:48:02.309138 | orchestrator | 2025-09-27 21:48:02.309148 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-27 21:48:02.309159 | orchestrator | Saturday 27 September 2025 21:47:36 +0000 (0:00:02.373) 0:00:44.843 **** 2025-09-27 21:48:02.309170 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:48:02.309181 | orchestrator | 2025-09-27 21:48:02.309191 | orchestrator | TASK 
[placement : Running placement bootstrap container] *********************** 2025-09-27 21:48:02.309202 | orchestrator | Saturday 27 September 2025 21:47:38 +0000 (0:00:02.075) 0:00:46.918 **** 2025-09-27 21:48:02.309213 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:48:02.309224 | orchestrator | 2025-09-27 21:48:02.309235 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-27 21:48:02.309245 | orchestrator | Saturday 27 September 2025 21:47:53 +0000 (0:00:15.210) 0:01:02.129 **** 2025-09-27 21:48:02.309256 | orchestrator | 2025-09-27 21:48:02.309267 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-27 21:48:02.309278 | orchestrator | Saturday 27 September 2025 21:47:53 +0000 (0:00:00.061) 0:01:02.191 **** 2025-09-27 21:48:02.309288 | orchestrator | 2025-09-27 21:48:02.309305 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-27 21:48:02.309316 | orchestrator | Saturday 27 September 2025 21:47:53 +0000 (0:00:00.068) 0:01:02.259 **** 2025-09-27 21:48:02.309327 | orchestrator | 2025-09-27 21:48:02.309338 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-27 21:48:02.309349 | orchestrator | Saturday 27 September 2025 21:47:53 +0000 (0:00:00.074) 0:01:02.334 **** 2025-09-27 21:48:02.309360 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:48:02.309371 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:48:02.309382 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:48:02.309392 | orchestrator | 2025-09-27 21:48:02.309403 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:48:02.309415 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:48:02.309427 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:48:02.309443 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:48:02.309454 | orchestrator | 2025-09-27 21:48:02.309465 | orchestrator | 2025-09-27 21:48:02.309476 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:48:02.309487 | orchestrator | Saturday 27 September 2025 21:47:58 +0000 (0:00:05.238) 0:01:07.572 **** 2025-09-27 21:48:02.309497 | orchestrator | =============================================================================== 2025-09-27 21:48:02.309508 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.21s 2025-09-27 21:48:02.309519 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.03s 2025-09-27 21:48:02.309530 | orchestrator | placement : Restart placement-api container ----------------------------- 5.24s 2025-09-27 21:48:02.309540 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.12s 2025-09-27 21:48:02.309551 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.04s 2025-09-27 21:48:02.309562 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.70s 2025-09-27 21:48:02.309573 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.61s 2025-09-27 21:48:02.309583 | orchestrator | 
service-ks-register : placement | Creating services --------------------- 3.53s 2025-09-27 21:48:02.309594 | orchestrator | placement : Creating placement databases -------------------------------- 2.37s 2025-09-27 21:48:02.309609 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.31s 2025-09-27 21:48:02.309620 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.08s 2025-09-27 21:48:02.309631 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.99s 2025-09-27 21:48:02.309641 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.56s 2025-09-27 21:48:02.309652 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s 2025-09-27 21:48:02.309663 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.27s 2025-09-27 21:48:02.309673 | orchestrator | placement : Check placement containers ---------------------------------- 1.06s 2025-09-27 21:48:02.309684 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.02s 2025-09-27 21:48:02.309695 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2025-09-27 21:48:02.309705 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.86s 2025-09-27 21:48:02.309716 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2025-09-27 21:48:02.309727 | orchestrator | 2025-09-27 21:48:02 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:02.309738 | orchestrator | 2025-09-27 21:48:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:05.343547 | orchestrator | 2025-09-27 21:48:05 | INFO  | Task e2f85a2a-cd68-4cdb-b429-d5c06431aa8a is in state SUCCESS 2025-09-27 21:48:05.344874 | orchestrator | 2025-09-27 21:48:05 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:05.346986 | orchestrator | 2025-09-27 21:48:05 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:05.349684 | orchestrator | 2025-09-27 21:48:05 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:05.349856 | orchestrator | 2025-09-27 21:48:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:08.397287 | orchestrator | 2025-09-27 21:48:08 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:08.398280 | orchestrator | 2025-09-27 21:48:08 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:08.400796 | orchestrator | 2025-09-27 21:48:08 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:08.402219 | orchestrator | 2025-09-27 21:48:08 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:08.402500 | orchestrator | 2025-09-27 21:48:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:11.442435 | orchestrator | 2025-09-27 21:48:11 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:11.444126 | orchestrator | 2025-09-27 21:48:11 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:11.445885 | orchestrator | 2025-09-27 21:48:11 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 
21:48:11.447322 | orchestrator | 2025-09-27 21:48:11 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:11.447517 | orchestrator | 2025-09-27 21:48:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:14.497388 | orchestrator | 2025-09-27 21:48:14 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:14.498535 | orchestrator | 2025-09-27 21:48:14 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:14.500837 | orchestrator | 2025-09-27 21:48:14 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:14.502615 | orchestrator | 2025-09-27 21:48:14 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:14.502922 | orchestrator | 2025-09-27 21:48:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:17.542114 | orchestrator | 2025-09-27 21:48:17 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:17.542404 | orchestrator | 2025-09-27 21:48:17 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:17.542435 | orchestrator | 2025-09-27 21:48:17 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:17.543499 | orchestrator | 2025-09-27 21:48:17 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:17.543535 | orchestrator | 2025-09-27 21:48:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:20.584171 | orchestrator | 2025-09-27 21:48:20 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:20.584298 | orchestrator | 2025-09-27 21:48:20 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:20.587061 | orchestrator | 2025-09-27 21:48:20 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:20.587917 | orchestrator | 2025-09-27 21:48:20 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:20.588021 | orchestrator | 2025-09-27 21:48:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:23.615927 | orchestrator | 2025-09-27 21:48:23 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:23.616017 | orchestrator | 2025-09-27 21:48:23 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:23.616788 | orchestrator | 2025-09-27 21:48:23 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:23.620227 | orchestrator | 2025-09-27 21:48:23 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:23.620265 | orchestrator | 2025-09-27 21:48:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:26.659385 | orchestrator | 2025-09-27 21:48:26 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:26.663115 | orchestrator | 2025-09-27 21:48:26 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:26.663668 | orchestrator | 2025-09-27 21:48:26 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:26.664479 | orchestrator | 2025-09-27 21:48:26 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:26.664504 | orchestrator | 2025-09-27 21:48:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:29.710831 | 
orchestrator | 2025-09-27 21:48:29 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:29.711469 | orchestrator | 2025-09-27 21:48:29 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:29.712691 | orchestrator | 2025-09-27 21:48:29 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:29.713971 | orchestrator | 2025-09-27 21:48:29 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:29.714159 | orchestrator | 2025-09-27 21:48:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:32.750947 | orchestrator | 2025-09-27 21:48:32 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:32.751261 | orchestrator | 2025-09-27 21:48:32 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:32.752243 | orchestrator | 2025-09-27 21:48:32 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:32.753509 | orchestrator | 2025-09-27 21:48:32 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:32.753675 | orchestrator | 2025-09-27 21:48:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:35.792655 | orchestrator | 2025-09-27 21:48:35 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:35.793252 | orchestrator | 2025-09-27 21:48:35 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:35.794166 | orchestrator | 2025-09-27 21:48:35 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:35.794887 | orchestrator | 2025-09-27 21:48:35 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:35.794914 | orchestrator | 2025-09-27 21:48:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:38.824336 | orchestrator | 2025-09-27 21:48:38 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:38.825362 | orchestrator | 2025-09-27 21:48:38 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:38.825932 | orchestrator | 2025-09-27 21:48:38 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:38.827483 | orchestrator | 2025-09-27 21:48:38 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:38.827652 | orchestrator | 2025-09-27 21:48:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:41.864500 | orchestrator | 2025-09-27 21:48:41 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:41.864602 | orchestrator | 2025-09-27 21:48:41 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:41.864626 | orchestrator | 2025-09-27 21:48:41 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:41.864647 | orchestrator | 2025-09-27 21:48:41 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:41.864697 | orchestrator | 2025-09-27 21:48:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:44.879202 | orchestrator | 2025-09-27 21:48:44 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:44.879366 | orchestrator | 2025-09-27 21:48:44 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:44.879489 | 
orchestrator | 2025-09-27 21:48:44 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:44.879889 | orchestrator | 2025-09-27 21:48:44 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:44.880197 | orchestrator | 2025-09-27 21:48:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:47.927938 | orchestrator | 2025-09-27 21:48:47 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:47.928881 | orchestrator | 2025-09-27 21:48:47 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:47.930587 | orchestrator | 2025-09-27 21:48:47 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:47.931835 | orchestrator | 2025-09-27 21:48:47 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:47.931865 | orchestrator | 2025-09-27 21:48:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:50.976584 | orchestrator | 2025-09-27 21:48:50 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:50.978841 | orchestrator | 2025-09-27 21:48:50 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:50.980043 | orchestrator | 2025-09-27 21:48:50 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:50.981833 | orchestrator | 2025-09-27 21:48:50 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:50.981925 | orchestrator | 2025-09-27 21:48:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:54.010472 | orchestrator | 2025-09-27 21:48:54 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:54.011561 | orchestrator | 2025-09-27 21:48:54 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:54.012428 | orchestrator | 2025-09-27 21:48:54 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:54.013544 | orchestrator | 2025-09-27 21:48:54 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:54.013573 | orchestrator | 2025-09-27 21:48:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:48:57.068909 | orchestrator | 2025-09-27 21:48:57 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:48:57.069703 | orchestrator | 2025-09-27 21:48:57 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:48:57.070799 | orchestrator | 2025-09-27 21:48:57 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:48:57.071832 | orchestrator | 2025-09-27 21:48:57 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:48:57.071860 | orchestrator | 2025-09-27 21:48:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:00.125246 | orchestrator | 2025-09-27 21:49:00 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:00.128337 | orchestrator | 2025-09-27 21:49:00 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:49:00.129751 | orchestrator | 2025-09-27 21:49:00 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:00.131031 | orchestrator | 2025-09-27 21:49:00 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:00.131279 | 
orchestrator | 2025-09-27 21:49:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:03.172352 | orchestrator | 2025-09-27 21:49:03 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:03.174710 | orchestrator | 2025-09-27 21:49:03 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:49:03.177147 | orchestrator | 2025-09-27 21:49:03 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:03.178792 | orchestrator | 2025-09-27 21:49:03 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:03.179167 | orchestrator | 2025-09-27 21:49:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:06.235601 | orchestrator | 2025-09-27 21:49:06 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:06.237163 | orchestrator | 2025-09-27 21:49:06 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:49:06.239342 | orchestrator | 2025-09-27 21:49:06 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:06.241504 | orchestrator | 2025-09-27 21:49:06 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:06.241670 | orchestrator | 2025-09-27 21:49:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:09.295182 | orchestrator | 2025-09-27 21:49:09 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:09.295520 | orchestrator | 2025-09-27 21:49:09 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:49:09.297715 | orchestrator | 2025-09-27 21:49:09 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:09.299294 | orchestrator | 2025-09-27 21:49:09 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:09.299906 | orchestrator | 2025-09-27 21:49:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:12.345033 | orchestrator | 2025-09-27 21:49:12 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:12.347023 | orchestrator | 2025-09-27 21:49:12 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state STARTED 2025-09-27 21:49:12.349218 | orchestrator | 2025-09-27 21:49:12 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:12.351051 | orchestrator | 2025-09-27 21:49:12 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:12.351589 | orchestrator | 2025-09-27 21:49:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:15.399149 | orchestrator | 2025-09-27 21:49:15 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:15.405635 | orchestrator | 2025-09-27 21:49:15 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:15.411889 | orchestrator | 2025-09-27 21:49:15 | INFO  | Task 57ed024b-854f-46c6-8bd0-4fd6255eb7cd is in state SUCCESS 2025-09-27 21:49:15.413198 | orchestrator | 2025-09-27 21:49:15.413271 | orchestrator | 2025-09-27 21:49:15.413284 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:49:15.413293 | orchestrator | 2025-09-27 21:49:15.413299 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:49:15.413306 | orchestrator | Saturday 27 
September 2025 21:48:03 +0000 (0:00:00.161) 0:00:00.161 **** 2025-09-27 21:49:15.413378 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:15.413396 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:15.413403 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:15.413409 | orchestrator | 2025-09-27 21:49:15.413416 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:49:15.413422 | orchestrator | Saturday 27 September 2025 21:48:03 +0000 (0:00:00.304) 0:00:00.466 **** 2025-09-27 21:49:15.413429 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-27 21:49:15.413437 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-27 21:49:15.413443 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-27 21:49:15.413450 | orchestrator | 2025-09-27 21:49:15.413457 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-27 21:49:15.413464 | orchestrator | 2025-09-27 21:49:15.413470 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-27 21:49:15.413476 | orchestrator | Saturday 27 September 2025 21:48:03 +0000 (0:00:00.531) 0:00:00.997 **** 2025-09-27 21:49:15.413483 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:15.413490 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:15.413497 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:15.413566 | orchestrator | 2025-09-27 21:49:15.413573 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:49:15.413582 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:49:15.413591 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:49:15.413597 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:49:15.413604 | orchestrator | 2025-09-27 21:49:15.413610 | orchestrator | 2025-09-27 21:49:15.413617 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:49:15.413624 | orchestrator | Saturday 27 September 2025 21:48:04 +0000 (0:00:00.724) 0:00:01.722 **** 2025-09-27 21:49:15.413643 | orchestrator | =============================================================================== 2025-09-27 21:49:15.413650 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-09-27 21:49:15.413657 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-09-27 21:49:15.413664 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-27 21:49:15.413671 | orchestrator | 2025-09-27 21:49:15.413678 | orchestrator | 2025-09-27 21:49:15.413684 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:49:15.413692 | orchestrator | 2025-09-27 21:49:15.413699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:49:15.413707 | orchestrator | Saturday 27 September 2025 21:44:47 +0000 (0:00:00.364) 0:00:00.364 **** 2025-09-27 21:49:15.413713 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:15.413721 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:15.413775 | orchestrator | ok: 
[testbed-node-2] 2025-09-27 21:49:15.413781 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:15.413788 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:15.413795 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:15.413801 | orchestrator | 2025-09-27 21:49:15.413809 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:49:15.413817 | orchestrator | Saturday 27 September 2025 21:44:48 +0000 (0:00:00.759) 0:00:01.124 **** 2025-09-27 21:49:15.413825 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-27 21:49:15.413834 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-27 21:49:15.413840 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-27 21:49:15.413847 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-27 21:49:15.413865 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-27 21:49:15.413872 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-27 21:49:15.413878 | orchestrator | 2025-09-27 21:49:15.413884 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-27 21:49:15.413891 | orchestrator | 2025-09-27 21:49:15.413897 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-27 21:49:15.413903 | orchestrator | Saturday 27 September 2025 21:44:49 +0000 (0:00:00.820) 0:00:01.945 **** 2025-09-27 21:49:15.413910 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:49:15.413919 | orchestrator | 2025-09-27 21:49:15.413925 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-27 21:49:15.413932 | orchestrator | Saturday 27 September 2025 21:44:50 +0000 (0:00:01.082) 0:00:03.027 **** 2025-09-27 21:49:15.413939 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:15.413947 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:15.413954 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:15.413961 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:15.413968 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:15.413975 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:15.413982 | orchestrator | 2025-09-27 21:49:15.413989 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-27 21:49:15.413996 | orchestrator | Saturday 27 September 2025 21:44:51 +0000 (0:00:01.291) 0:00:04.319 **** 2025-09-27 21:49:15.414003 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:15.414010 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:15.414056 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:15.414065 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:15.414073 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:15.414095 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:15.414102 | orchestrator | 2025-09-27 21:49:15.414110 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-27 21:49:15.414117 | orchestrator | Saturday 27 September 2025 21:44:52 +0000 (0:00:01.032) 0:00:05.351 **** 2025-09-27 21:49:15.414125 | orchestrator | ok: [testbed-node-0] => { 2025-09-27 21:49:15.414135 | orchestrator |  "changed": false, 2025-09-27 21:49:15.414142 | orchestrator |  "msg": "All assertions 
passed" 2025-09-27 21:49:15.414157 | orchestrator | } 2025-09-27 21:49:15.414164 | orchestrator | ok: [testbed-node-1] => { 2025-09-27 21:49:15.414171 | orchestrator |  "changed": false, 2025-09-27 21:49:15.414176 | orchestrator |  "msg": "All assertions passed" 2025-09-27 21:49:15.414182 | orchestrator | } 2025-09-27 21:49:15.414188 | orchestrator | ok: [testbed-node-2] => { 2025-09-27 21:49:15.414195 | orchestrator |  "changed": false, 2025-09-27 21:49:15.414201 | orchestrator |  "msg": "All assertions passed" 2025-09-27 21:49:15.414208 | orchestrator | } 2025-09-27 21:49:15.414214 | orchestrator | ok: [testbed-node-3] => { 2025-09-27 21:49:15.414221 | orchestrator |  "changed": false, 2025-09-27 21:49:15.414228 | orchestrator |  "msg": "All assertions passed" 2025-09-27 21:49:15.414234 | orchestrator | } 2025-09-27 21:49:15.414241 | orchestrator | ok: [testbed-node-4] => { 2025-09-27 21:49:15.414247 | orchestrator |  "changed": false, 2025-09-27 21:49:15.414253 | orchestrator |  "msg": "All assertions passed" 2025-09-27 21:49:15.414259 | orchestrator | } 2025-09-27 21:49:15.414265 | orchestrator | ok: [testbed-node-5] => { 2025-09-27 21:49:15.414270 | orchestrator |  "changed": false, 2025-09-27 21:49:15.414276 | orchestrator |  "msg": "All assertions passed" 2025-09-27 21:49:15.414283 | orchestrator | } 2025-09-27 21:49:15.414290 | orchestrator | 2025-09-27 21:49:15.414296 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-27 21:49:15.414302 | orchestrator | Saturday 27 September 2025 21:44:53 +0000 (0:00:00.944) 0:00:06.296 **** 2025-09-27 21:49:15.414308 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.414315 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.414331 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.414337 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.414343 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.414350 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.414356 | orchestrator | 2025-09-27 21:49:15.414362 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-27 21:49:15.414368 | orchestrator | Saturday 27 September 2025 21:44:54 +0000 (0:00:00.673) 0:00:06.969 **** 2025-09-27 21:49:15.414375 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-27 21:49:15.414381 | orchestrator | 2025-09-27 21:49:15.414388 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-27 21:49:15.414403 | orchestrator | Saturday 27 September 2025 21:44:57 +0000 (0:00:03.567) 0:00:10.536 **** 2025-09-27 21:49:15.414410 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-27 21:49:15.414427 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-27 21:49:15.414434 | orchestrator | 2025-09-27 21:49:15.414441 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-27 21:49:15.414449 | orchestrator | Saturday 27 September 2025 21:45:05 +0000 (0:00:07.474) 0:00:18.010 **** 2025-09-27 21:49:15.414456 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:49:15.414463 | orchestrator | 2025-09-27 21:49:15.414471 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-27 21:49:15.414478 
| orchestrator | Saturday 27 September 2025 21:45:08 +0000 (0:00:03.483) 0:00:21.494 **** 2025-09-27 21:49:15.414485 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:49:15.414492 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-27 21:49:15.414499 | orchestrator | 2025-09-27 21:49:15.414506 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-27 21:49:15.414513 | orchestrator | Saturday 27 September 2025 21:45:13 +0000 (0:00:04.079) 0:00:25.573 **** 2025-09-27 21:49:15.414520 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:49:15.414528 | orchestrator | 2025-09-27 21:49:15.414536 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-27 21:49:15.414544 | orchestrator | Saturday 27 September 2025 21:45:16 +0000 (0:00:03.713) 0:00:29.287 **** 2025-09-27 21:49:15.414551 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-27 21:49:15.414559 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-27 21:49:15.414566 | orchestrator | 2025-09-27 21:49:15.414573 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-27 21:49:15.414580 | orchestrator | Saturday 27 September 2025 21:45:25 +0000 (0:00:08.442) 0:00:37.730 **** 2025-09-27 21:49:15.414587 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.414594 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.414600 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.414607 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.414614 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.414621 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.414628 | orchestrator | 2025-09-27 21:49:15.414635 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-27 21:49:15.414642 | orchestrator | Saturday 27 September 2025 21:45:25 +0000 (0:00:00.767) 0:00:38.497 **** 2025-09-27 21:49:15.414649 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.414656 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.414664 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.414671 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.414678 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.414686 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.414693 | orchestrator | 2025-09-27 21:49:15.414700 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-27 21:49:15.414715 | orchestrator | Saturday 27 September 2025 21:45:28 +0000 (0:00:02.191) 0:00:40.688 **** 2025-09-27 21:49:15.414723 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:15.414730 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:15.414737 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:49:15.414745 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:49:15.414811 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:49:15.414839 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:15.414846 | orchestrator | 2025-09-27 21:49:15.414852 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-27 21:49:15.414859 | orchestrator | Saturday 27 September 2025 21:45:30 +0000 (0:00:01.919) 0:00:42.607 **** 
2025-09-27 21:49:15.414865 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.414871 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.414878 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.414884 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.414890 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.414896 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.414903 | orchestrator | 2025-09-27 21:49:15.414910 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-27 21:49:15.414916 | orchestrator | Saturday 27 September 2025 21:45:33 +0000 (0:00:03.209) 0:00:45.817 **** 2025-09-27 21:49:15.414926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.414942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.414950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.414965 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.414981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.414989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.414995 | orchestrator | 2025-09-27 21:49:15.415002 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-27 21:49:15.415009 | orchestrator | Saturday 27 September 2025 21:45:36 +0000 (0:00:03.644) 0:00:49.461 **** 2025-09-27 21:49:15.415015 | orchestrator | [WARNING]: Skipped 2025-09-27 21:49:15.415022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-27 21:49:15.415029 | orchestrator | due to this access issue: 2025-09-27 21:49:15.415039 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-27 21:49:15.415046 | orchestrator | a directory 2025-09-27 21:49:15.415052 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:49:15.415059 | orchestrator | 2025-09-27 21:49:15.415065 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-27 21:49:15.415071 | orchestrator | Saturday 27 September 2025 21:45:37 +0000 (0:00:00.909) 0:00:50.370 **** 2025-09-27 21:49:15.415079 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 
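The "Ensuring config directories exist" task above iterates a per-service dict: neutron-server on the control nodes (testbed-node-0..2) and neutron-ovn-metadata-agent on the compute nodes (testbed-node-3..5), creating an /etc/kolla/<service> directory for each. The subsequent [WARNING] about the overlays/neutron/plugins/ path is benign: the find task looks for operator-supplied extra ML2 plugins and that overlay directory simply does not exist in this testbed configuration. A simplified sketch of the directory loop, with the dict trimmed to the keys the loop actually uses (the mode and ownership values are assumptions):

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    owner: root
    group: root
    mode: "0770"
  become: true
  when: item.value.enabled | bool
  with_dict:
    neutron-server:
      enabled: true
    neutron-ovn-metadata-agent:
      enabled: true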
2025-09-27 21:49:15.415086 | orchestrator | 2025-09-27 21:49:15.415093 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-27 21:49:15.415100 | orchestrator | Saturday 27 September 2025 21:45:39 +0000 (0:00:01.275) 0:00:51.646 **** 2025-09-27 21:49:15.415107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415171 | orchestrator | 2025-09-27 21:49:15.415177 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-27 21:49:15.415183 | orchestrator | Saturday 27 September 2025 21:45:43 +0000 (0:00:04.414) 0:00:56.061 **** 2025-09-27 21:49:15.415195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415202 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415227 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.415233 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.415239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415252 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.415258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415264 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.415275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415282 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.415288 | orchestrator | 2025-09-27 21:49:15.415295 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-27 21:49:15.415301 | orchestrator | Saturday 27 September 2025 21:45:46 +0000 (0:00:03.031) 0:00:59.093 **** 2025-09-27 21:49:15.415308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415314 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415341 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.415348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415355 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.415362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415369 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.415380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415387 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.415393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415400 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.415406 | orchestrator | 2025-09-27 21:49:15.415413 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-27 21:49:15.415426 | orchestrator | Saturday 27 September 2025 21:45:50 +0000 (0:00:03.829) 0:01:02.922 **** 2025-09-27 21:49:15.415432 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.415442 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.415449 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415455 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.415462 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.415468 | 
orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.415475 | orchestrator | 2025-09-27 21:49:15.415481 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-27 21:49:15.415488 | orchestrator | Saturday 27 September 2025 21:45:53 +0000 (0:00:02.897) 0:01:05.820 **** 2025-09-27 21:49:15.415495 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415501 | orchestrator | 2025-09-27 21:49:15.415508 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-27 21:49:15.415515 | orchestrator | Saturday 27 September 2025 21:45:53 +0000 (0:00:00.122) 0:01:05.942 **** 2025-09-27 21:49:15.415521 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415528 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.415535 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.415542 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.415548 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.415555 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.415561 | orchestrator | 2025-09-27 21:49:15.415567 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-27 21:49:15.415573 | orchestrator | Saturday 27 September 2025 21:45:54 +0000 (0:00:00.906) 0:01:06.849 **** 2025-09-27 21:49:15.415580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415588 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415609 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.415616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415629 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.415640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415648 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.415654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415661 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.415668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415674 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.415681 | orchestrator | 2025-09-27 21:49:15.415686 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 
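The skipped tasks above follow the usual kolla-ansible conditional-copy pattern: backend TLS certificate/key and PEM handling are skipped because backend TLS is not enabled in this deployment, and the policy-file tasks are skipped because no operator-supplied policy override is present. A minimal sketch of that pattern (check for an override on the deploy host, copy it only if it exists); the override path and target directories here are illustrative, not the role's actual values:

- name: Check if policies shall be overwritten
  ansible.builtin.stat:
    path: "/etc/kolla/config/neutron/policy.yaml"   # assumed override location
  delegate_to: localhost
  run_once: true
  register: neutron_policy

- name: Copying over existing policy file
  ansible.builtin.template:
    src: "{{ neutron_policy.stat.path }}"
    dest: "/etc/kolla/{{ item }}/policy.yaml"
    mode: "0660"
  become: true
  when: neutron_policy.stat.exists
  loop:
    - neutron-server
    - neutron-ovn-metadata-agent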
2025-09-27 21:49:15.415693 | orchestrator | Saturday 27 September 2025 21:45:57 +0000 (0:00:03.683) 0:01:10.533 **** 2025-09-27 21:49:15.415705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415786 | orchestrator | 2025-09-27 21:49:15.415793 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-27 21:49:15.415799 | orchestrator | Saturday 27 September 2025 21:46:02 +0000 (0:00:04.345) 0:01:14.878 **** 2025-09-27 21:49:15.415806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.415848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.415862 | orchestrator | 
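The neutron.conf copy above again splits by role: the three controllers receive the neutron-server configuration, the three compute nodes the neutron-ovn-metadata-agent configuration. kolla-ansible actually renders this file with its merge_configs action plugin, layering the role template with operator overrides and notifying container-restart handlers; the following is only a simplified stand-in using plain templating, with an assumed template name and no INI merging:

- name: Copying over neutron.conf
  ansible.builtin.template:
    src: neutron.conf.j2                     # assumed template in the sketch
    dest: "/etc/kolla/{{ item }}/neutron.conf"
    mode: "0660"
  become: true
  loop:
    - neutron-server
    - neutron-ovn-metadata-agent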
2025-09-27 21:49:15.415869 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-27 21:49:15.415875 | orchestrator | Saturday 27 September 2025 21:46:09 +0000 (0:00:06.739) 0:01:21.618 **** 2025-09-27 21:49:15.415886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415900 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.415907 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.415919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415931 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.415938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.415944 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.415954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415961 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.415967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.415974 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.415980 | orchestrator | 2025-09-27 21:49:15.415987 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-27 21:49:15.415993 | orchestrator | Saturday 27 September 2025 21:46:12 +0000 (0:00:03.120) 0:01:24.739 **** 2025-09-27 21:49:15.416000 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416006 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416012 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:15.416018 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416025 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:15.416031 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:15.416038 | orchestrator | 2025-09-27 21:49:15.416044 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-27 21:49:15.416050 | orchestrator | Saturday 27 September 2025 21:46:15 +0000 (0:00:02.981) 0:01:27.720 **** 2025-09-27 21:49:15.416062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416070 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416089 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416103 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.416124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.416141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.416148 | orchestrator | 2025-09-27 21:49:15.416155 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-27 21:49:15.416161 | orchestrator | Saturday 27 September 2025 21:46:19 +0000 (0:00:03.848) 0:01:31.569 **** 2025-09-27 21:49:15.416168 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416174 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416180 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416187 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416194 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416200 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416206 | orchestrator | 2025-09-27 21:49:15.416213 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-27 21:49:15.416219 | orchestrator | Saturday 27 September 2025 21:46:21 +0000 (0:00:02.848) 0:01:34.417 **** 2025-09-27 21:49:15.416226 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416233 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416240 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416246 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416253 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416259 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416264 | orchestrator | 2025-09-27 21:49:15.416270 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-27 21:49:15.416276 | orchestrator | Saturday 27 September 2025 21:46:24 +0000 (0:00:02.785) 0:01:37.203 **** 2025-09-27 21:49:15.416282 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416288 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416295 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416301 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416307 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416313 | 
orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416320 | orchestrator | 2025-09-27 21:49:15.416326 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-27 21:49:15.416333 | orchestrator | Saturday 27 September 2025 21:46:27 +0000 (0:00:02.516) 0:01:39.719 **** 2025-09-27 21:49:15.416339 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416346 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416353 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416359 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416366 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416373 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416379 | orchestrator | 2025-09-27 21:49:15.416390 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-27 21:49:15.416397 | orchestrator | Saturday 27 September 2025 21:46:30 +0000 (0:00:02.864) 0:01:42.584 **** 2025-09-27 21:49:15.416404 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416416 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416422 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416428 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416434 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416440 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416448 | orchestrator | 2025-09-27 21:49:15.416456 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-27 21:49:15.416463 | orchestrator | Saturday 27 September 2025 21:46:31 +0000 (0:00:01.860) 0:01:44.445 **** 2025-09-27 21:49:15.416470 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416478 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416485 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416492 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416499 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416507 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416512 | orchestrator | 2025-09-27 21:49:15.416519 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-27 21:49:15.416525 | orchestrator | Saturday 27 September 2025 21:46:34 +0000 (0:00:02.119) 0:01:46.564 **** 2025-09-27 21:49:15.416532 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-27 21:49:15.416538 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416545 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-27 21:49:15.416551 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416557 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-27 21:49:15.416564 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416570 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-27 21:49:15.416577 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416583 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-27 21:49:15.416589 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416596 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-27 21:49:15.416602 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416608 | orchestrator | 2025-09-27 21:49:15.416614 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-27 21:49:15.416620 | orchestrator | Saturday 27 September 2025 21:46:36 +0000 (0:00:02.669) 0:01:49.234 **** 2025-09-27 21:49:15.416638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.416645 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.416665 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.416712 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416725 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416738 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416780 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416793 | orchestrator | 2025-09-27 21:49:15.416800 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-27 21:49:15.416807 | orchestrator | Saturday 27 September 2025 21:46:38 +0000 (0:00:02.050) 0:01:51.285 **** 2025-09-27 21:49:15.416814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.416821 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416838 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416851 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.416868 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.416887 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.416903 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416910 | orchestrator | 2025-09-27 21:49:15.416917 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-27 21:49:15.416923 | orchestrator | Saturday 27 September 2025 21:46:40 +0000 (0:00:01.931) 0:01:53.216 **** 2025-09-27 21:49:15.416930 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.416937 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.416944 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.416951 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.416958 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.416965 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.416971 | orchestrator | 2025-09-27 21:49:15.416979 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-27 21:49:15.416986 | orchestrator | Saturday 27 September 2025 21:46:42 +0000 (0:00:02.227) 0:01:55.443 **** 2025-09-27 21:49:15.416993 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417001 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417008 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417015 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:15.417022 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:15.417030 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:15.417037 | orchestrator | 2025-09-27 21:49:15.417044 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-27 21:49:15.417051 | orchestrator | Saturday 27 September 2025 21:46:46 +0000 (0:00:03.396) 0:01:58.840 **** 2025-09-27 21:49:15.417058 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417065 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417072 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417079 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417086 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417093 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417100 | orchestrator | 2025-09-27 21:49:15.417107 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-27 21:49:15.417114 | orchestrator | Saturday 27 September 2025 21:46:48 +0000 (0:00:02.187) 0:02:01.028 **** 
2025-09-27 21:49:15.417121 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417128 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417135 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417142 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417153 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417160 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417167 | orchestrator | 2025-09-27 21:49:15.417174 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-27 21:49:15.417181 | orchestrator | Saturday 27 September 2025 21:46:51 +0000 (0:00:02.809) 0:02:03.838 **** 2025-09-27 21:49:15.417187 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417193 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417200 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417206 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417213 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417220 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417227 | orchestrator | 2025-09-27 21:49:15.417233 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-27 21:49:15.417240 | orchestrator | Saturday 27 September 2025 21:46:54 +0000 (0:00:02.772) 0:02:06.610 **** 2025-09-27 21:49:15.417247 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417254 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417260 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417266 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417273 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417383 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417396 | orchestrator | 2025-09-27 21:49:15.417403 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-27 21:49:15.417410 | orchestrator | Saturday 27 September 2025 21:46:55 +0000 (0:00:01.837) 0:02:08.447 **** 2025-09-27 21:49:15.417416 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417423 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417429 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417435 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417442 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417448 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417454 | orchestrator | 2025-09-27 21:49:15.417461 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-27 21:49:15.417467 | orchestrator | Saturday 27 September 2025 21:46:57 +0000 (0:00:01.878) 0:02:10.326 **** 2025-09-27 21:49:15.417473 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417480 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417486 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417492 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417499 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417505 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417512 | orchestrator | 2025-09-27 21:49:15.417518 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-27 21:49:15.417526 | orchestrator | Saturday 27 September 2025 21:46:59 +0000 (0:00:02.027) 0:02:12.353 
**** 2025-09-27 21:49:15.417533 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417544 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417551 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417557 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417563 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417569 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417575 | orchestrator | 2025-09-27 21:49:15.417581 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-27 21:49:15.417587 | orchestrator | Saturday 27 September 2025 21:47:01 +0000 (0:00:02.023) 0:02:14.377 **** 2025-09-27 21:49:15.417593 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 21:49:15.417600 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417606 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 21:49:15.417612 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417625 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 21:49:15.417638 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417645 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 21:49:15.417651 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417658 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 21:49:15.417664 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417671 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-27 21:49:15.417677 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417684 | orchestrator | 2025-09-27 21:49:15.417690 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-27 21:49:15.417697 | orchestrator | Saturday 27 September 2025 21:47:04 +0000 (0:00:02.775) 0:02:17.152 **** 2025-09-27 21:49:15.417704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.417711 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.417732 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.417745 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.417806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-27 21:49:15.417823 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.417836 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-27 21:49:15.417849 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.417856 | orchestrator | 2025-09-27 21:49:15.417862 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-27 21:49:15.417869 | orchestrator | Saturday 27 September 2025 21:47:07 +0000 (0:00:02.420) 0:02:19.572 **** 2025-09-27 21:49:15.417883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.417890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.417910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.417918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-27 21:49:15.417926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.417938 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-27 21:49:15.417946 | orchestrator | 2025-09-27 21:49:15.417953 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-27 21:49:15.417960 | orchestrator | Saturday 27 September 2025 21:47:09 +0000 (0:00:02.271) 0:02:21.844 **** 2025-09-27 21:49:15.417966 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:15.417971 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:15.417977 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:15.417992 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:49:15.417998 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:49:15.418004 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:49:15.418011 | orchestrator | 2025-09-27 21:49:15.418046 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-27 21:49:15.418053 | orchestrator | Saturday 27 September 2025 21:47:09 +0000 (0:00:00.477) 
0:02:22.321 **** 2025-09-27 21:49:15.418061 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:15.418069 | orchestrator | 2025-09-27 21:49:15.418076 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-27 21:49:15.418083 | orchestrator | Saturday 27 September 2025 21:47:11 +0000 (0:00:02.147) 0:02:24.469 **** 2025-09-27 21:49:15.418091 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:15.418098 | orchestrator | 2025-09-27 21:49:15.418105 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-27 21:49:15.418114 | orchestrator | Saturday 27 September 2025 21:47:14 +0000 (0:00:02.486) 0:02:26.955 **** 2025-09-27 21:49:15.418122 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:15.418129 | orchestrator | 2025-09-27 21:49:15.418139 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-27 21:49:15.418147 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:41.829) 0:03:08.784 **** 2025-09-27 21:49:15.418155 | orchestrator | 2025-09-27 21:49:15.418161 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-27 21:49:15.418168 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:00.068) 0:03:08.853 **** 2025-09-27 21:49:15.418175 | orchestrator | 2025-09-27 21:49:15.418187 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-27 21:49:15.418195 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:00.250) 0:03:09.103 **** 2025-09-27 21:49:15.418203 | orchestrator | 2025-09-27 21:49:15.418210 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-27 21:49:15.418218 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:00.062) 0:03:09.166 **** 2025-09-27 21:49:15.418225 | orchestrator | 2025-09-27 21:49:15.418232 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-27 21:49:15.418239 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:00.065) 0:03:09.232 **** 2025-09-27 21:49:15.418247 | orchestrator | 2025-09-27 21:49:15.418255 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-27 21:49:15.418261 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:00.063) 0:03:09.295 **** 2025-09-27 21:49:15.418267 | orchestrator | 2025-09-27 21:49:15.418273 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-27 21:49:15.418280 | orchestrator | Saturday 27 September 2025 21:47:56 +0000 (0:00:00.064) 0:03:09.360 **** 2025-09-27 21:49:15.418287 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:15.418294 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:15.418301 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:15.418308 | orchestrator | 2025-09-27 21:49:15.418315 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-27 21:49:15.418324 | orchestrator | Saturday 27 September 2025 21:48:19 +0000 (0:00:22.986) 0:03:32.346 **** 2025-09-27 21:49:15.418331 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:49:15.418339 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:49:15.418347 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:49:15.418354 | orchestrator | 
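The loop items printed throughout this neutron play all share one shape: each kolla service is a dict keyed by service name, carrying the container image, the bind-mounted volumes, a healthcheck (healthcheck_curl against the node's internal API address on 9696 for neutron-server, healthcheck_port 6640 for the OVN metadata agent), and, for API services, HAProxy frontends for the internal listener and for the external FQDN api.testbed.osism.xyz. The following is a minimal, illustrative Python sketch of that structure using only values visible in this log; the helper name and the hard-coded node list are assumptions for illustration, not part of the kolla-ansible neutron role, which renders these definitions from its own defaults.

    # Illustrative only: mirrors the per-node service dicts echoed in the loop
    # output above. The real definitions are produced by the kolla-ansible
    # neutron role, not by this helper.
    def neutron_server_service(node_ip: str) -> dict:
        return {
            "neutron-server": {
                "container_name": "neutron_server",
                "image": "registry.osism.tech/kolla/neutron-server:2024.2",
                "enabled": True,
                "group": "neutron-server",
                "volumes": [
                    "/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro",
                    "/etc/localtime:/etc/localtime:ro",
                    "/etc/timezone:/etc/timezone:ro",
                    "kolla_logs:/var/log/kolla/",
                ],
                # healthcheck_curl probes the node's own internal API address.
                "healthcheck": {
                    "interval": "30",
                    "retries": "3",
                    "start_period": "5",
                    "test": ["CMD-SHELL", f"healthcheck_curl http://{node_ip}:9696"],
                    "timeout": "30",
                },
                # One internal frontend plus one external frontend on the same port.
                "haproxy": {
                    "neutron_server": {
                        "enabled": True, "mode": "http",
                        "external": False, "port": "9696", "listen_port": "9696",
                    },
                    "neutron_server_external": {
                        "enabled": True, "mode": "http", "external": True,
                        "external_fqdn": "api.testbed.osism.xyz",
                        "port": "9696", "listen_port": "9696",
                    },
                },
            }
        }

    # Control-plane addresses seen in this run (testbed-node-0/1/2):
    for ip in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
        print(neutron_server_service(ip)["neutron-server"]["healthcheck"]["test"])

The same pattern explains why testbed-node-3/4/5 appear only in the neutron-ovn-metadata-agent items: those hosts run the agent container (checked via healthcheck_port on 6640) and carry no HAProxy section, while the three controllers run neutron-server behind the internal and external 9696 listeners.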
2025-09-27 21:49:15.418363 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:49:15.418371 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:49:15.418379 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-27 21:49:15.418386 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-27 21:49:15.418403 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:49:15.418410 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:49:15.418418 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-27 21:49:15.418424 | orchestrator | 2025-09-27 21:49:15.418431 | orchestrator | 2025-09-27 21:49:15.418438 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:49:15.418445 | orchestrator | Saturday 27 September 2025 21:49:13 +0000 (0:00:53.251) 0:04:25.598 **** 2025-09-27 21:49:15.418459 | orchestrator | =============================================================================== 2025-09-27 21:49:15.418467 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 53.25s 2025-09-27 21:49:15.418474 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.83s 2025-09-27 21:49:15.418481 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.99s 2025-09-27 21:49:15.418488 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.44s 2025-09-27 21:49:15.418495 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.47s 2025-09-27 21:49:15.418502 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.74s 2025-09-27 21:49:15.418508 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.41s 2025-09-27 21:49:15.418515 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.35s 2025-09-27 21:49:15.418523 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.08s 2025-09-27 21:49:15.418529 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.85s 2025-09-27 21:49:15.418537 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.83s 2025-09-27 21:49:15.418544 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.71s 2025-09-27 21:49:15.418551 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.68s 2025-09-27 21:49:15.418559 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.64s 2025-09-27 21:49:15.418566 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.57s 2025-09-27 21:49:15.418573 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.48s 2025-09-27 21:49:15.418580 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.40s 2025-09-27 21:49:15.418588 | orchestrator | Setting sysctl 
values --------------------------------------------------- 3.21s 2025-09-27 21:49:15.418595 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.12s 2025-09-27 21:49:15.418603 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.03s 2025-09-27 21:49:15.418615 | orchestrator | 2025-09-27 21:49:15 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:15.420740 | orchestrator | 2025-09-27 21:49:15 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:15.420807 | orchestrator | 2025-09-27 21:49:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:18.459353 | orchestrator | 2025-09-27 21:49:18 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:18.460237 | orchestrator | 2025-09-27 21:49:18 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:18.462604 | orchestrator | 2025-09-27 21:49:18 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:18.464253 | orchestrator | 2025-09-27 21:49:18 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:18.464627 | orchestrator | 2025-09-27 21:49:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:21.495737 | orchestrator | 2025-09-27 21:49:21 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:21.495892 | orchestrator | 2025-09-27 21:49:21 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:21.496697 | orchestrator | 2025-09-27 21:49:21 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:21.497735 | orchestrator | 2025-09-27 21:49:21 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:21.497785 | orchestrator | 2025-09-27 21:49:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:24.522455 | orchestrator | 2025-09-27 21:49:24 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:24.522923 | orchestrator | 2025-09-27 21:49:24 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:24.523797 | orchestrator | 2025-09-27 21:49:24 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:24.525007 | orchestrator | 2025-09-27 21:49:24 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:24.525081 | orchestrator | 2025-09-27 21:49:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:27.562675 | orchestrator | 2025-09-27 21:49:27 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:27.564689 | orchestrator | 2025-09-27 21:49:27 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:27.567069 | orchestrator | 2025-09-27 21:49:27 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:27.568303 | orchestrator | 2025-09-27 21:49:27 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:27.568856 | orchestrator | 2025-09-27 21:49:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:30.602556 | orchestrator | 2025-09-27 21:49:30 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:30.603360 | orchestrator | 2025-09-27 21:49:30 | INFO  | Task 
85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:30.604182 | orchestrator | 2025-09-27 21:49:30 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:30.605251 | orchestrator | 2025-09-27 21:49:30 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:30.605337 | orchestrator | 2025-09-27 21:49:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:33.656270 | orchestrator | 2025-09-27 21:49:33 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:33.657502 | orchestrator | 2025-09-27 21:49:33 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:33.659097 | orchestrator | 2025-09-27 21:49:33 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:33.660697 | orchestrator | 2025-09-27 21:49:33 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:33.660779 | orchestrator | 2025-09-27 21:49:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:36.706552 | orchestrator | 2025-09-27 21:49:36 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:36.707647 | orchestrator | 2025-09-27 21:49:36 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:36.709487 | orchestrator | 2025-09-27 21:49:36 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:36.710577 | orchestrator | 2025-09-27 21:49:36 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:36.710658 | orchestrator | 2025-09-27 21:49:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:39.754305 | orchestrator | 2025-09-27 21:49:39 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:39.757539 | orchestrator | 2025-09-27 21:49:39 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:39.761614 | orchestrator | 2025-09-27 21:49:39 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:39.765391 | orchestrator | 2025-09-27 21:49:39 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:39.765491 | orchestrator | 2025-09-27 21:49:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:42.814711 | orchestrator | 2025-09-27 21:49:42 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:42.816885 | orchestrator | 2025-09-27 21:49:42 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:42.818160 | orchestrator | 2025-09-27 21:49:42 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:42.819936 | orchestrator | 2025-09-27 21:49:42 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:42.819977 | orchestrator | 2025-09-27 21:49:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:45.864044 | orchestrator | 2025-09-27 21:49:45 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state STARTED 2025-09-27 21:49:45.865770 | orchestrator | 2025-09-27 21:49:45 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:45.870125 | orchestrator | 2025-09-27 21:49:45 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:45.873199 | orchestrator | 2025-09-27 21:49:45 | INFO  | Task 
26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:45.873252 | orchestrator | 2025-09-27 21:49:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:48.917829 | orchestrator | 2025-09-27 21:49:48 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:49:48.920547 | orchestrator | 2025-09-27 21:49:48 | INFO  | Task ac52b3a3-0fda-4bfd-963a-919268cc0865 is in state SUCCESS 2025-09-27 21:49:48.922665 | orchestrator | 2025-09-27 21:49:48.922880 | orchestrator | 2025-09-27 21:49:48.922899 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:49:48.922913 | orchestrator | 2025-09-27 21:49:48.922925 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:49:48.922936 | orchestrator | Saturday 27 September 2025 21:47:57 +0000 (0:00:00.258) 0:00:00.258 **** 2025-09-27 21:49:48.922948 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:48.922960 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:48.922971 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:48.922981 | orchestrator | 2025-09-27 21:49:48.922993 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:49:48.923003 | orchestrator | Saturday 27 September 2025 21:47:57 +0000 (0:00:00.329) 0:00:00.588 **** 2025-09-27 21:49:48.923014 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-27 21:49:48.923026 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-27 21:49:48.923065 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-27 21:49:48.923076 | orchestrator | 2025-09-27 21:49:48.923087 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-27 21:49:48.923098 | orchestrator | 2025-09-27 21:49:48.923109 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-27 21:49:48.923120 | orchestrator | Saturday 27 September 2025 21:47:58 +0000 (0:00:00.512) 0:00:01.100 **** 2025-09-27 21:49:48.923131 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:49:48.923143 | orchestrator | 2025-09-27 21:49:48.923154 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-27 21:49:48.923165 | orchestrator | Saturday 27 September 2025 21:47:58 +0000 (0:00:00.653) 0:00:01.754 **** 2025-09-27 21:49:48.923178 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-27 21:49:48.923190 | orchestrator | 2025-09-27 21:49:48.923204 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-27 21:49:48.923223 | orchestrator | Saturday 27 September 2025 21:48:02 +0000 (0:00:03.665) 0:00:05.419 **** 2025-09-27 21:49:48.923241 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-27 21:49:48.923261 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-27 21:49:48.923281 | orchestrator | 2025-09-27 21:49:48.923301 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-27 21:49:48.923321 | orchestrator | Saturday 27 September 2025 21:48:09 +0000 (0:00:06.913) 0:00:12.333 **** 
2025-09-27 21:49:48.923342 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:49:48.923362 | orchestrator | 2025-09-27 21:49:48.923389 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-27 21:49:48.923401 | orchestrator | Saturday 27 September 2025 21:48:12 +0000 (0:00:03.127) 0:00:15.461 **** 2025-09-27 21:49:48.923412 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:49:48.923423 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-27 21:49:48.923434 | orchestrator | 2025-09-27 21:49:48.923445 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-27 21:49:48.923455 | orchestrator | Saturday 27 September 2025 21:48:16 +0000 (0:00:03.881) 0:00:19.342 **** 2025-09-27 21:49:48.923466 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:49:48.923477 | orchestrator | 2025-09-27 21:49:48.923488 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-27 21:49:48.923499 | orchestrator | Saturday 27 September 2025 21:48:20 +0000 (0:00:03.932) 0:00:23.275 **** 2025-09-27 21:49:48.923510 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-27 21:49:48.923521 | orchestrator | 2025-09-27 21:49:48.923531 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-27 21:49:48.923543 | orchestrator | Saturday 27 September 2025 21:48:25 +0000 (0:00:04.851) 0:00:28.127 **** 2025-09-27 21:49:48.923554 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.923565 | orchestrator | 2025-09-27 21:49:48.923576 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-27 21:49:48.923586 | orchestrator | Saturday 27 September 2025 21:48:28 +0000 (0:00:03.704) 0:00:31.831 **** 2025-09-27 21:49:48.923597 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.923608 | orchestrator | 2025-09-27 21:49:48.923618 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-27 21:49:48.923629 | orchestrator | Saturday 27 September 2025 21:48:33 +0000 (0:00:04.307) 0:00:36.139 **** 2025-09-27 21:49:48.923640 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.923651 | orchestrator | 2025-09-27 21:49:48.923662 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-27 21:49:48.923672 | orchestrator | Saturday 27 September 2025 21:48:37 +0000 (0:00:03.876) 0:00:40.015 **** 2025-09-27 21:49:48.923715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.923756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.923776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.923789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.923801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.923827 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.923839 | orchestrator | 2025-09-27 21:49:48.923850 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-27 21:49:48.923862 | orchestrator | Saturday 27 September 2025 21:48:39 +0000 (0:00:02.306) 0:00:42.322 **** 2025-09-27 21:49:48.923873 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:48.923884 | orchestrator | 2025-09-27 21:49:48.923895 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-27 21:49:48.923905 | orchestrator | Saturday 27 September 2025 21:48:39 +0000 (0:00:00.227) 0:00:42.550 **** 2025-09-27 21:49:48.923916 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:48.923927 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:48.923938 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:48.923949 | orchestrator | 2025-09-27 21:49:48.923960 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-27 21:49:48.923971 | orchestrator | Saturday 27 September 2025 21:48:40 +0000 (0:00:00.609) 0:00:43.159 **** 2025-09-27 21:49:48.923982 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:49:48.923993 | orchestrator | 2025-09-27 21:49:48.924004 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-27 21:49:48.924015 | orchestrator | Saturday 27 September 2025 21:48:41 +0000 (0:00:01.784) 0:00:44.943 **** 2025-09-27 21:49:48.924027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.924096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.924107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.924118 | orchestrator | 2025-09-27 21:49:48.924130 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-27 21:49:48.924141 | orchestrator | Saturday 27 September 2025 21:48:44 +0000 (0:00:02.991) 0:00:47.934 **** 2025-09-27 21:49:48.924158 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:49:48.924169 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:49:48.924180 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:49:48.924191 | orchestrator | 2025-09-27 21:49:48.924202 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-27 21:49:48.924213 | orchestrator | Saturday 27 September 2025 21:48:45 +0000 (0:00:00.259) 0:00:48.194 **** 2025-09-27 21:49:48.924224 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:49:48.924241 | orchestrator | 2025-09-27 21:49:48.924251 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-27 21:49:48.924263 | orchestrator | Saturday 27 September 2025 21:48:45 +0000 (0:00:00.497) 0:00:48.692 **** 2025-09-27 21:49:48.924284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.924388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.924420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.924433 | orchestrator | 2025-09-27 21:49:48.924444 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-27 21:49:48.924455 | orchestrator | Saturday 27 September 2025 21:48:47 +0000 (0:00:02.219) 0:00:50.911 **** 2025-09-27 21:49:48.924475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.924487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.924498 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:48.924510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.924534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.924591 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:48.924603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.924623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.924635 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:48.924646 | orchestrator | 2025-09-27 21:49:48.924657 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-27 21:49:48.924668 | orchestrator | Saturday 27 September 2025 21:48:48 +0000 (0:00:00.445) 0:00:51.357 **** 2025-09-27 21:49:48.924679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.924696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.924743 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:48.924755 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.924767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.924778 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:48.924799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.924811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.924829 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:48.924840 | 
orchestrator | 2025-09-27 21:49:48.924851 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-27 21:49:48.924862 | orchestrator | Saturday 27 September 2025 21:48:49 +0000 (0:00:00.792) 0:00:52.150 **** 2025-09-27 21:49:48.924878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.924891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925198 | orchestrator | 2025-09-27 21:49:48.925209 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-27 21:49:48.925220 | orchestrator | Saturday 27 September 2025 21:48:51 +0000 (0:00:02.354) 0:00:54.505 **** 2025-09-27 21:49:48.925231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925320 | orchestrator | 2025-09-27 21:49:48.925332 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-27 21:49:48.925343 | orchestrator | Saturday 27 September 2025 21:48:56 +0000 (0:00:04.908) 0:00:59.413 **** 2025-09-27 21:49:48.925360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.925372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.925389 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:48.925401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.925418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.925429 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:48.925441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-27 21:49:48.925457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:49:48.925469 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:48.925480 | orchestrator | 2025-09-27 21:49:48.925491 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-27 21:49:48.925510 | orchestrator | Saturday 27 September 2025 21:48:57 +0000 (0:00:00.582) 0:00:59.996 **** 2025-09-27 21:49:48.925521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-27 21:49:48.925561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:49:48.925627 | orchestrator | 2025-09-27 21:49:48.925647 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-27 21:49:48.925666 | orchestrator | Saturday 27 September 2025 21:49:00 +0000 (0:00:03.613) 0:01:03.609 **** 2025-09-27 21:49:48.925685 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:49:48.925704 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:49:48.925747 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:49:48.925761 | orchestrator | 2025-09-27 21:49:48.925773 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-27 21:49:48.925785 | orchestrator | Saturday 27 September 2025 21:49:01 +0000 (0:00:00.459) 0:01:04.069 **** 2025-09-27 21:49:48.925797 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.925809 | orchestrator | 2025-09-27 21:49:48.925821 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-27 21:49:48.925833 | orchestrator | Saturday 27 September 2025 21:49:02 +0000 (0:00:01.906) 0:01:05.975 **** 2025-09-27 21:49:48.925845 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.925857 | orchestrator | 2025-09-27 21:49:48.925875 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-27 21:49:48.925888 | orchestrator | Saturday 27 September 2025 21:49:05 +0000 (0:00:02.048) 0:01:08.024 **** 2025-09-27 21:49:48.925900 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.925912 | orchestrator | 2025-09-27 21:49:48.925924 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-27 21:49:48.925936 | orchestrator | Saturday 27 September 2025 21:49:20 +0000 (0:00:15.937) 0:01:23.961 **** 2025-09-27 21:49:48.925947 | orchestrator | 2025-09-27 21:49:48.925958 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-27 21:49:48.925969 | orchestrator | Saturday 27 September 2025 21:49:21 +0000 (0:00:00.063) 0:01:24.024 **** 2025-09-27 21:49:48.925979 | orchestrator | 2025-09-27 21:49:48.925990 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-27 21:49:48.926001 | orchestrator | Saturday 27 September 2025 21:49:21 +0000 (0:00:00.064) 0:01:24.089 **** 2025-09-27 21:49:48.926012 | orchestrator | 2025-09-27 21:49:48.926085 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-27 21:49:48.926097 | orchestrator | Saturday 27 September 2025 21:49:21 +0000 (0:00:00.067) 0:01:24.157 **** 2025-09-27 21:49:48.926108 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.926119 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:48.926130 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:48.926140 | orchestrator | 2025-09-27 21:49:48.926152 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-27 21:49:48.926162 
| orchestrator | Saturday 27 September 2025 21:49:36 +0000 (0:00:14.868) 0:01:39.026 **** 2025-09-27 21:49:48.926173 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:49:48.926184 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:49:48.926204 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:49:48.926214 | orchestrator | 2025-09-27 21:49:48.926225 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:49:48.926236 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-27 21:49:48.926249 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:49:48.926260 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-27 21:49:48.926271 | orchestrator | 2025-09-27 21:49:48.926282 | orchestrator | 2025-09-27 21:49:48.926293 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:49:48.926461 | orchestrator | Saturday 27 September 2025 21:49:46 +0000 (0:00:10.570) 0:01:49.597 **** 2025-09-27 21:49:48.926475 | orchestrator | =============================================================================== 2025-09-27 21:49:48.926486 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.94s 2025-09-27 21:49:48.926508 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.87s 2025-09-27 21:49:48.926519 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.57s 2025-09-27 21:49:48.926530 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.91s 2025-09-27 21:49:48.926541 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.91s 2025-09-27 21:49:48.926552 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.85s 2025-09-27 21:49:48.926563 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.31s 2025-09-27 21:49:48.926576 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.93s 2025-09-27 21:49:48.926595 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.88s 2025-09-27 21:49:48.926614 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.88s 2025-09-27 21:49:48.926635 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.70s 2025-09-27 21:49:48.926656 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.67s 2025-09-27 21:49:48.926674 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.61s 2025-09-27 21:49:48.926693 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.13s 2025-09-27 21:49:48.926704 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.99s 2025-09-27 21:49:48.926715 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.35s 2025-09-27 21:49:48.926800 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.31s 2025-09-27 21:49:48.926811 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.22s 
2025-09-27 21:49:48.926823 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.05s 2025-09-27 21:49:48.926834 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.91s 2025-09-27 21:49:48.926845 | orchestrator | 2025-09-27 21:49:48 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:48.926857 | orchestrator | 2025-09-27 21:49:48 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:48.928838 | orchestrator | 2025-09-27 21:49:48 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:48.928902 | orchestrator | 2025-09-27 21:49:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:51.974139 | orchestrator | 2025-09-27 21:49:51 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:49:51.974265 | orchestrator | 2025-09-27 21:49:51 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state STARTED 2025-09-27 21:49:51.975842 | orchestrator | 2025-09-27 21:49:51 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:51.976665 | orchestrator | 2025-09-27 21:49:51 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:51.976709 | orchestrator | 2025-09-27 21:49:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:55.037327 | orchestrator | 2025-09-27 21:49:55 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:49:55.039453 | orchestrator | 2025-09-27 21:49:55 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:49:55.041289 | orchestrator | 2025-09-27 21:49:55 | INFO  | Task 85b02bc4-7dc9-40c6-ab4a-b22ab2bb6ad5 is in state SUCCESS 2025-09-27 21:49:55.045199 | orchestrator | 2025-09-27 21:49:55 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:55.047459 | orchestrator | 2025-09-27 21:49:55 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:55.047529 | orchestrator | 2025-09-27 21:49:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:49:58.104122 | orchestrator | 2025-09-27 21:49:58 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:49:58.104351 | orchestrator | 2025-09-27 21:49:58 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:49:58.105168 | orchestrator | 2025-09-27 21:49:58 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:49:58.105979 | orchestrator | 2025-09-27 21:49:58 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:49:58.106008 | orchestrator | 2025-09-27 21:49:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:01.150794 | orchestrator | 2025-09-27 21:50:01 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:01.153811 | orchestrator | 2025-09-27 21:50:01 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:01.155831 | orchestrator | 2025-09-27 21:50:01 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:01.157678 | orchestrator | 2025-09-27 21:50:01 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:01.157703 | orchestrator | 2025-09-27 21:50:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:04.209872 
| orchestrator | 2025-09-27 21:50:04 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:04.211176 | orchestrator | 2025-09-27 21:50:04 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:04.213054 | orchestrator | 2025-09-27 21:50:04 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:04.214007 | orchestrator | 2025-09-27 21:50:04 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:04.214108 | orchestrator | 2025-09-27 21:50:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:07.260455 | orchestrator | 2025-09-27 21:50:07 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:07.262293 | orchestrator | 2025-09-27 21:50:07 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:07.266693 | orchestrator | 2025-09-27 21:50:07 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:07.268364 | orchestrator | 2025-09-27 21:50:07 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:07.268762 | orchestrator | 2025-09-27 21:50:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:10.311859 | orchestrator | 2025-09-27 21:50:10 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:10.313194 | orchestrator | 2025-09-27 21:50:10 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:10.315982 | orchestrator | 2025-09-27 21:50:10 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:10.317389 | orchestrator | 2025-09-27 21:50:10 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:10.317437 | orchestrator | 2025-09-27 21:50:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:13.353840 | orchestrator | 2025-09-27 21:50:13 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:13.355745 | orchestrator | 2025-09-27 21:50:13 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:13.357487 | orchestrator | 2025-09-27 21:50:13 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:13.359856 | orchestrator | 2025-09-27 21:50:13 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:13.359889 | orchestrator | 2025-09-27 21:50:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:16.400307 | orchestrator | 2025-09-27 21:50:16 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:16.402459 | orchestrator | 2025-09-27 21:50:16 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:16.404362 | orchestrator | 2025-09-27 21:50:16 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:16.406202 | orchestrator | 2025-09-27 21:50:16 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:16.406249 | orchestrator | 2025-09-27 21:50:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:19.447827 | orchestrator | 2025-09-27 21:50:19 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:19.448283 | orchestrator | 2025-09-27 21:50:19 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:19.449257 | 
orchestrator | 2025-09-27 21:50:19 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:19.451078 | orchestrator | 2025-09-27 21:50:19 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:19.451112 | orchestrator | 2025-09-27 21:50:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:22.492590 | orchestrator | 2025-09-27 21:50:22 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:22.492741 | orchestrator | 2025-09-27 21:50:22 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:22.494235 | orchestrator | 2025-09-27 21:50:22 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:22.495483 | orchestrator | 2025-09-27 21:50:22 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:22.495512 | orchestrator | 2025-09-27 21:50:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:25.553287 | orchestrator | 2025-09-27 21:50:25 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:25.554209 | orchestrator | 2025-09-27 21:50:25 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:25.555777 | orchestrator | 2025-09-27 21:50:25 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state STARTED 2025-09-27 21:50:25.556901 | orchestrator | 2025-09-27 21:50:25 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:25.556935 | orchestrator | 2025-09-27 21:50:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:28.587855 | orchestrator | 2025-09-27 21:50:28 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:28.591704 | orchestrator | 2025-09-27 21:50:28 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:28.593836 | orchestrator | 2025-09-27 21:50:28 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:28.597891 | orchestrator | 2025-09-27 21:50:28 | INFO  | Task 4afb9192-9605-441b-a17d-bfcb8e1e0d77 is in state SUCCESS 2025-09-27 21:50:28.600081 | orchestrator | 2025-09-27 21:50:28.600115 | orchestrator | 2025-09-27 21:50:28.600133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:50:28.600155 | orchestrator | 2025-09-27 21:50:28.600174 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:50:28.600192 | orchestrator | Saturday 27 September 2025 21:49:18 +0000 (0:00:00.254) 0:00:00.254 **** 2025-09-27 21:50:28.600265 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:28.600333 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:28.600395 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:28.600407 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:28.600427 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:28.600439 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:28.600450 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:28.600481 | orchestrator | 2025-09-27 21:50:28.600493 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:50:28.600504 | orchestrator | Saturday 27 September 2025 21:49:19 +0000 (0:00:00.873) 0:00:01.128 **** 2025-09-27 21:50:28.600530 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 
2025-09-27 21:50:28.600542 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-09-27 21:50:28.600553 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-09-27 21:50:28.600564 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-09-27 21:50:28.600575 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-09-27 21:50:28.600585 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-09-27 21:50:28.600598 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-09-27 21:50:28.600609 | orchestrator |
2025-09-27 21:50:28.600620 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-27 21:50:28.600631 | orchestrator |
2025-09-27 21:50:28.600642 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-09-27 21:50:28.600653 | orchestrator | Saturday 27 September 2025 21:49:19 +0000 (0:00:00.653) 0:00:01.781 ****
2025-09-27 21:50:28.600665 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager
2025-09-27 21:50:28.600712 | orchestrator |
2025-09-27 21:50:28.600726 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-09-27 21:50:28.600740 | orchestrator | Saturday 27 September 2025 21:49:21 +0000 (0:00:01.977) 0:00:03.762 ****
2025-09-27 21:50:28.600750 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store))
2025-09-27 21:50:28.600761 | orchestrator |
2025-09-27 21:50:28.600772 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-09-27 21:50:28.600783 | orchestrator | Saturday 27 September 2025 21:49:25 +0000 (0:00:03.985) 0:00:07.747 ****
2025-09-27 21:50:28.600795 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-09-27 21:50:28.600829 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-09-27 21:50:28.600840 | orchestrator |
2025-09-27 21:50:28.600851 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-09-27 21:50:28.600862 | orchestrator | Saturday 27 September 2025 21:49:32 +0000 (0:00:06.818) 0:00:14.566 ****
2025-09-27 21:50:28.600873 | orchestrator | ok: [testbed-node-3] => (item=service)
2025-09-27 21:50:28.600884 | orchestrator |
2025-09-27 21:50:28.600895 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-09-27 21:50:28.600905 | orchestrator | Saturday 27 September 2025 21:49:36 +0000 (0:00:03.839) 0:00:18.406 ****
2025-09-27 21:50:28.600916 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-27 21:50:28.600927 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service)
2025-09-27 21:50:28.600937 | orchestrator |
2025-09-27 21:50:28.600948 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-09-27 21:50:28.600959 | orchestrator | Saturday 27 September 2025 21:49:40 +0000 (0:00:03.719) 0:00:22.125 ****
2025-09-27 21:50:28.600970 | orchestrator | ok: [testbed-node-3] => (item=admin)
2025-09-27 21:50:28.600981 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin)
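The service-ks-register tasks above register the RADOS Gateway with Keystone as the swift object-store service: a service record, internal and public endpoints on api-int.testbed.osism.xyz and api.testbed.osism.xyz, the service project, a ceph_rgw user, and the admin and ResellerAdmin roles, which the next task then grants to that user. A rough openstacksdk equivalent of those calls, as a sketch only (the role actually runs Ansible openstack.cloud modules; the cloud name, region, and password below are placeholders):

import openstack

# Assumed cloud entry in clouds.yaml with admin credentials.
conn = openstack.connect(cloud="testbed-admin")

# Service and endpoints for object-store (swift), mirroring the log entries.
svc = conn.identity.create_service(name="swift", type="object-store")
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
    ("public", "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s"),
]:
    conn.identity.create_endpoint(service_id=svc.id, interface=interface,
                                  url=url, region_id="RegionOne")

# Service project and ceph_rgw user.
project = conn.identity.find_project("service") or conn.identity.create_project(name="service")
user = conn.identity.create_user(name="ceph_rgw", password="CHANGE_ME",
                                 default_project_id=project.id)

# Ensure the roles exist, then grant admin to ceph_rgw on the service project
# (the "Granting user roles" task below).
for role_name in ("admin", "ResellerAdmin"):
    if conn.identity.find_role(role_name) is None:
        conn.identity.create_role(name=role_name)
conn.identity.assign_project_role_to_user(
    project, user, conn.identity.find_role("admin"))

In the log the admin item comes back ok because that role already exists, while ResellerAdmin is created fresh and therefore reports changed.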
2025-09-27 21:50:28.600991 | orchestrator |
2025-09-27 21:50:28.601002 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-09-27 21:50:28.601013 | orchestrator | Saturday 27 September 2025 21:49:46 +0000 (0:00:06.560) 0:00:28.685 ****
2025-09-27 21:50:28.601024 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin)
2025-09-27 21:50:28.601034 | orchestrator |
2025-09-27 21:50:28.601045 | orchestrator | PLAY RECAP *********************************************************************
2025-09-27 21:50:28.601056 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601067 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601078 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601089 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601100 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601124 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601136 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-27 21:50:28.601147 | orchestrator |
2025-09-27 21:50:28.601157 | orchestrator |
2025-09-27 21:50:28.601168 | orchestrator | TASKS RECAP ********************************************************************
2025-09-27 21:50:28.601179 | orchestrator | Saturday 27 September 2025 21:49:52 +0000 (0:00:05.171) 0:00:33.857 ****
2025-09-27 21:50:28.601190 | orchestrator | ===============================================================================
2025-09-27 21:50:28.601201 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.82s
2025-09-27 21:50:28.601211 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.56s
2025-09-27 21:50:28.601222 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.17s
2025-09-27 21:50:28.601239 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.99s
2025-09-27 21:50:28.601250 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.84s
2025-09-27 21:50:28.601261 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.72s
2025-09-27 21:50:28.601279 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.98s
2025-09-27 21:50:28.601290 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.87s
2025-09-27 21:50:28.601300 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2025-09-27 21:50:28.601311 | orchestrator |
2025-09-27 21:50:28.601321 | orchestrator |
2025-09-27 21:50:28.601332 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-27 21:50:28.601343 | orchestrator |
2025-09-27 21:50:28.601353 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-27 21:50:28.601364 | orchestrator | Saturday 27 September 2025 21:48:08 +0000 (0:00:00.290)
0:00:00.290 **** 2025-09-27 21:50:28.601374 | orchestrator | ok: [testbed-manager] 2025-09-27 21:50:28.601385 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:28.601396 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:28.601407 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:28.601417 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:50:28.601428 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:50:28.601438 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:50:28.601449 | orchestrator | 2025-09-27 21:50:28.601460 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:50:28.601470 | orchestrator | Saturday 27 September 2025 21:48:09 +0000 (0:00:00.776) 0:00:01.066 **** 2025-09-27 21:50:28.601481 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601492 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601502 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601513 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601524 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601534 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601545 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-27 21:50:28.601555 | orchestrator | 2025-09-27 21:50:28.601566 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-27 21:50:28.601576 | orchestrator | 2025-09-27 21:50:28.601587 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-27 21:50:28.601598 | orchestrator | Saturday 27 September 2025 21:48:10 +0000 (0:00:00.654) 0:00:01.721 **** 2025-09-27 21:50:28.601609 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:50:28.601621 | orchestrator | 2025-09-27 21:50:28.601632 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-27 21:50:28.601643 | orchestrator | Saturday 27 September 2025 21:48:11 +0000 (0:00:01.445) 0:00:03.167 **** 2025-09-27 21:50:28.601656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601706 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 21:50:28.601730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.601766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.601777 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.601824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.601841 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.601864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.601876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601888 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.601934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601962 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 21:50:28.601975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.601998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602143 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602177 | orchestrator | 2025-09-27 21:50:28.602188 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-27 21:50:28.602199 | orchestrator | Saturday 27 September 2025 21:48:14 +0000 (0:00:02.558) 0:00:05.726 **** 2025-09-27 21:50:28.602210 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:50:28.602221 | orchestrator | 2025-09-27 21:50:28.602232 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-27 21:50:28.602243 | orchestrator | Saturday 27 September 2025 21:48:15 +0000 (0:00:01.324) 0:00:07.050 **** 2025-09-27 21:50:28.602255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 21:50:28.602274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602342 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.602377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602447 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 
21:50:28.602470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602488 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602505 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 21:50:28.602522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602586 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.602631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602643 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.602672 | orchestrator | 2025-09-27 21:50:28.602749 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-27 21:50:28.602774 | orchestrator | Saturday 27 September 2025 21:48:21 +0000 (0:00:05.794) 0:00:12.844 **** 2025-09-27 21:50:28.602882 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 21:50:28.602899 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.602937 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.602989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 21:50:28.603003 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603110 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:28.603122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603133 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603175 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.603192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603392 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.603420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603436 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.603448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603491 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.603502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603559 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.603572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603754 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.603765 | orchestrator | 2025-09-27 21:50:28.603776 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-27 21:50:28.603787 | orchestrator | Saturday 27 September 2025 21:48:23 +0000 (0:00:02.290) 0:00:15.135 **** 2025-09-27 21:50:28.603798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603908 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-27 21:50:28.603927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603938 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.603950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.603961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.603990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.604002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.604024 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-27 21:50:28.604037 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.604048 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.604059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.604070 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.604082 | orchestrator | skipping: [testbed-manager] 2025-09-27 21:50:28.604093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.604104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.605403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-27 21:50:28.605457 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.605472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.605483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605503 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.605514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.605524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605544 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.605565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-27 21:50:28.605598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605616 | orchestrator 
| skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-27 21:50:28.605633 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.605649 | orchestrator | 2025-09-27 21:50:28.605666 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-27 21:50:28.605703 | orchestrator | Saturday 27 September 2025 21:48:25 +0000 (0:00:02.520) 0:00:17.655 **** 2025-09-27 21:50:28.605721 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-27 21:50:28.605737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605811 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605828 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.605862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.605880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.605896 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.605907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.605935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.605951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.605961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.605971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.605981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.605992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.606005 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-27 21:50:28.606070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.606088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.606101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.606111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.606121 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.606131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.606142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.606157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.606171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.606181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.606192 | orchestrator | 2025-09-27 21:50:28.606202 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-27 21:50:28.606211 | orchestrator | Saturday 27 September 2025 21:48:32 
+0000 (0:00:06.479) 0:00:24.135 **** 2025-09-27 21:50:28.606221 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:50:28.606231 | orchestrator | 2025-09-27 21:50:28.606241 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-27 21:50:28.606251 | orchestrator | Saturday 27 September 2025 21:48:33 +0000 (0:00:00.877) 0:00:25.013 **** 2025-09-27 21:50:28.606261 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606347 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606366 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606382 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606406 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606417 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606432 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606442 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606452 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.606462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606477 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606492 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 850618, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.349296, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606503 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606516 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606526 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606536 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606551 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 
1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606561 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606571 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606586 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606600 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606611 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606620 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606636 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606645 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606655 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606670 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606702 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606713 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606723 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 850637, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3537593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.606738 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606748 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606758 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606775 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606789 | orchestrator 
| skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606799 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606810 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606825 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606845 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606860 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606875 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606885 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606900 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606920 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606930 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 850615, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3477116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.606945 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606962 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606972 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606987 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.606997 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 
850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607007 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607032 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607046 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607057 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607071 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607081 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607091 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607101 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607116 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607130 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607141 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607155 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607166 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607176 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607186 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607201 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607215 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607230 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607240 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607250 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607260 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 850630, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.352382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.607270 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 
1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607286 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607300 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607315 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607325 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607335 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607345 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607355 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607370 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607384 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607399 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607409 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607419 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 850612, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3469694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.607429 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607439 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607455 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607471 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.607485 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607495 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607505 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607515 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607525 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607535 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607550 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607572 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 
1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607583 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607593 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607602 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607613 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607622 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607638 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607654 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.607668 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607689 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 850620, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3497164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.607700 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607714 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607731 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607749 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607774 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.607800 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607824 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607843 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.607861 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607877 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.607887 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-27 21:50:28.607897 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.607907 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 13522, 'inode': 850628, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3519616, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.607917 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 850622, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.607927 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 850616, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3488505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608006 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850635, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3534355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608023 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850610, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3464816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608034 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 850653, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3598723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608044 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 850633, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3531954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608054 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 850613, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3472486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608064 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 850611, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3467371, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608079 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 850626, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3516707, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608095 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 850625, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3506823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608110 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 850651, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3593771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-27 21:50:28.608120 | 
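The per-item dictionaries in the loop results above ('path', 'mode', 'isdir', 'uid', ...) are the file entries returned by Ansible's find module, fed one at a time into a copy task: testbed-manager reports changed for every rule file, presumably because only it belongs to the Prometheus server group, while the testbed-node-* hosts skip each item. A minimal sketch of that pattern follows; the task names, source and destination paths, and group name are hypothetical, and the actual kolla-ansible prometheus role task is not shown in this log:

    # Sketch only: a delegated find feeds a looped copy, producing one result
    # dict per rule file exactly like the entries logged above.
    - name: Find Prometheus alerting rules              # hypothetical task name
      ansible.builtin.find:
        paths: "/operations/prometheus"                 # illustrative source directory
        patterns: "*.rules,*.rec.rules"
      delegate_to: localhost
      run_once: true
      register: prometheus_alert_rules

    - name: Copy Prometheus alerting rules              # hypothetical task name
      ansible.builtin.copy:
        src: "{{ item.path }}"
        dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"  # assumed destination
        mode: "0644"
      loop: "{{ prometheus_alert_rules.files }}"
      when: inventory_hostname in groups['prometheus']  # assumed condition; would explain the skips on testbed-node-*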
orchestrator | 2025-09-27 21:50:28.608130 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-27 21:50:28.608140 | orchestrator | Saturday 27 September 2025 21:48:56 +0000 (0:00:22.685) 0:00:47.699 **** 2025-09-27 21:50:28.608150 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:50:28.608159 | orchestrator | 2025-09-27 21:50:28.608169 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-27 21:50:28.608179 | orchestrator | Saturday 27 September 2025 21:48:56 +0000 (0:00:00.819) 0:00:48.519 **** 2025-09-27 21:50:28.608188 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608198 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608208 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-27 21:50:28.608218 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608227 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608237 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-27 21:50:28.608247 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608256 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608266 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-27 21:50:28.608275 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608285 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608295 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:50:28.608305 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608314 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608324 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-27 21:50:28.608334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608343 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608353 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-27 21:50:28.608363 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608378 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608388 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-27 21:50:28.608397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608407 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608417 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608426 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608436 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-27 21:50:28.608445 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608455 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608465 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608474 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608484 | orchestrator | node-4/prometheus.yml.d' path due 
to this access issue: 2025-09-27 21:50:28.608493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608503 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608513 | orchestrator | [WARNING]: Skipped 2025-09-27 21:50:28.608522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608532 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-27 21:50:28.608542 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-27 21:50:28.608551 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-27 21:50:28.608561 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-27 21:50:28.608570 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:50:28.608580 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:50:28.608589 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:50:28.608599 | orchestrator | 2025-09-27 21:50:28.608608 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-27 21:50:28.608618 | orchestrator | Saturday 27 September 2025 21:48:59 +0000 (0:00:02.611) 0:00:51.130 **** 2025-09-27 21:50:28.608628 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-27 21:50:28.608637 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.608651 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-27 21:50:28.608661 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-27 21:50:28.608671 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.608730 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.608740 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-27 21:50:28.608750 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.608759 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-27 21:50:28.608769 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.608779 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-27 21:50:28.608789 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.608834 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"msg": "{{ prometheus_blackbox_exporter_endpoints_default | selectattr('enabled', 'true') | map(attribute='endpoints') | flatten | union(prometheus_blackbox_exporter_endpoints_custom) | unique | select | list }}: [{'endpoints': ['aodh:os_endpoint:{{ aodh_public_endpoint }}', \"{{ ('aodh_internal:os_endpoint:' + aodh_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_aodh | bool }}'}, {'endpoints': ['barbican:os_endpoint:{{ barbican_public_endpoint }}', \"{{ ('barbican_internal:os_endpoint:' + barbican_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_barbican | bool }}'}, {'endpoints': ['blazar:os_endpoint:{{ blazar_public_base_endpoint }}', \"{{ ('blazar_internal:os_endpoint:' + blazar_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_blazar | bool }}'}, {'endpoints': ['ceph_rgw:http_2xx:{{ ceph_rgw_public_base_endpoint }}', \"{{ ('ceph_rgw_internal:http_2xx:' + ceph_rgw_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_ceph_rgw | bool }}'}, {'endpoints': ['cinder:os_endpoint:{{ cinder_public_base_endpoint }}', \"{{ ('cinder_internal:os_endpoint:' + cinder_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_cinder | bool }}'}, {'endpoints': ['cloudkitty:os_endpoint:{{ cloudkitty_public_endpoint }}', \"{{ ('cloudkitty_internal:os_endpoint:' + cloudkitty_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_cloudkitty | bool }}'}, {'endpoints': ['designate:os_endpoint:{{ designate_public_endpoint }}', \"{{ ('designate_internal:os_endpoint:' + designate_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_designate | bool }}'}, {'endpoints': ['glance:os_endpoint:{{ glance_public_endpoint }}', \"{{ ('glance_internal:os_endpoint:' + glance_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_glance | bool }}'}, {'endpoints': ['gnocchi:os_endpoint:{{ gnocchi_public_endpoint }}', \"{{ ('gnocchi_internal:os_endpoint:' + gnocchi_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_gnocchi | bool }}'}, {'endpoints': ['heat:os_endpoint:{{ heat_public_base_endpoint }}', \"{{ ('heat_internal:os_endpoint:' + heat_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'heat_cfn:os_endpoint:{{ heat_cfn_public_base_endpoint }}', \"{{ ('heat_cfn_internal:os_endpoint:' + heat_cfn_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_heat | bool }}'}, {'endpoints': ['horizon:http_2xx:{{ horizon_public_endpoint }}', \"{{ ('horizon_internal:http_2xx:' + horizon_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_horizon | bool }}'}, {'endpoints': ['ironic:os_endpoint:{{ ironic_public_endpoint }}', \"{{ ('ironic_internal:os_endpoint:' + ironic_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'ironic_inspector:os_endpoint:{{ ironic_inspector_public_endpoint }}', \"{{ ('ironic_inspector_internal:os_endpoint:' + ironic_inspector_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_ironic | bool }}'}, {'endpoints': ['keystone:os_endpoint:{{ keystone_public_url }}', \"{{ ('keystone_internal:os_endpoint:' + keystone_internal_url) if not 
kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_keystone | bool }}'}, {'endpoints': ['magnum:os_endpoint:{{ magnum_public_base_endpoint }}', \"{{ ('magnum_internal:os_endpoint:' + magnum_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_magnum | bool }}'}, {'endpoints': ['manila:os_endpoint:{{ manila_public_base_endpoint }}', \"{{ ('manila_internal:os_endpoint:' + manila_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_manila | bool }}'}, {'endpoints': ['masakari:os_endpoint:{{ masakari_public_endpoint }}', \"{{ ('masakari_internal:os_endpoint:' + masakari_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_masakari | bool }}'}, {'endpoints': ['mistral:os_endpoint:{{ mistral_public_base_endpoint }}', \"{{ ('mistral_internal:os_endpoint:' + mistral_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_mistral | bool }}'}, {'endpoints': ['neutron:os_endpoint:{{ neutron_public_endpoint }}', \"{{ ('neutron_internal:os_endpoint:' + neutron_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_neutron | bool }}'}, {'endpoints': ['nova:os_endpoint:{{ nova_public_base_endpoint }}', \"{{ ('nova_internal:os_endpoint:' + nova_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_nova | bool }}'}, {'endpoints': ['octavia:os_endpoint:{{ octavia_public_endpoint }}', \"{{ ('octavia_internal:os_endpoint:' + octavia_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_octavia | bool }}'}, {'endpoints': ['placement:os_endpoint:{{ placement_public_endpoint }}', \"{{ ('placement_internal:os_endpoint:' + placement_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_placement | bool }}'}, {'endpoints': ['skyline_apiserver:os_endpoint:{{ skyline_apiserver_public_endpoint }}', \"{{ ('skyline_apiserver_internal:os_endpoint:' + skyline_apiserver_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'skyline_console:os_endpoint:{{ skyline_console_public_endpoint }}', \"{{ ('skyline_console_internal:os_endpoint:' + skyline_console_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_skyline | bool }}'}, {'endpoints': ['swift:os_endpoint:{{ swift_public_base_endpoint }}', \"{{ ('swift_internal:os_endpoint:' + swift_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_swift | bool }}'}, {'endpoints': ['tacker:os_endpoint:{{ tacker_public_endpoint }}', \"{{ ('tacker_internal:os_endpoint:' + tacker_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_tacker | bool }}'}, {'endpoints': ['trove:os_endpoint:{{ trove_public_base_endpoint }}', \"{{ ('trove_internal:os_endpoint:' + trove_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_trove | bool }}'}, {'endpoints': ['venus:os_endpoint:{{ venus_public_endpoint }}', \"{{ ('venus_internal:os_endpoint:' + venus_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_venus | bool }}'}, {'endpoints': ['watcher:os_endpoint:{{ watcher_public_endpoint }}', \"{{ ('watcher_internal:os_endpoint:' + watcher_internal_endpoint) if not kolla_same_external_internal_vip | bool 
}}\"], 'enabled': '{{ enable_watcher | bool }}'}, {'endpoints': ['zun:os_endpoint:{{ zun_public_base_endpoint }}', \"{{ ('zun_internal:os_endpoint:' + zun_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_zun | bool }}'}, {'endpoints': \"{% set etcd_endpoints = [] %}{% for host in groups.get('etcd', []) %}{{ etcd_endpoints.append('etcd_' + host + ':http_2xx:' + hostvars[host]['etcd_protocol'] + '://' + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['etcd_client_port'] + '/metrics')}}{% endfor %}{{ etcd_endpoints }}\", 'enabled': '{{ enable_etcd | bool }}'}, {'endpoints': ['grafana:http_2xx:{{ grafana_public_endpoint }}', \"{{ ('grafana_internal:http_2xx:' + grafana_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_grafana | bool }}'}, {'endpoints': ['opensearch:http_2xx:{{ opensearch_internal_endpoint }}'], 'enabled': '{{ enable_opensearch | bool }}'}, {'endpoints': ['opensearch_dashboards:http_2xx_opensearch_dashboards:{{ opensearch_dashboards_internal_endpoint }}/api/status'], 'enabled': '{{ enable_opensearch_dashboards | bool }}'}, {'endpoints': ['opensearch_dashboards_external:http_2xx_opensearch_dashboards:{{ opensearch_dashboards_external_endpoint }}/api/status'], 'enabled': '{{ enable_opensearch_dashboards_external | bool }}'}, {'endpoints': ['prometheus:http_2xx_prometheus:{{ prometheus_public_endpoint if enable_prometheus_server_external else prometheus_internal_endpoint }}/-/healthy'], 'enabled': '{{ enable_prometheus | bool }}'}, {'endpoints': ['prometheus_alertmanager:http_2xx_alertmanager:{{ prometheus_alertmanager_public_endpoint if enable_prometheus_alertmanager_external else prometheus_alertmanager_internal_endpoint }}'], 'enabled': '{{ enable_prometheus_alertmanager | bool }}'}, {'endpoints': \"{% set rabbitmq_endpoints = [] %}{% for host in groups.get('rabbitmq', []) %}{{ rabbitmq_endpoints.append('rabbitmq_' + host + (':tls_connect:' if rabbitmq_enable_tls | bool else ':tcp_connect:') + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['rabbitmq_port'] ) }}{% endfor %}{{ rabbitmq_endpoints }}\", 'enabled': '{{ enable_rabbitmq | bool }}'}, {'endpoints': \"{% set redis_endpoints = [] %}{% for host in groups.get('redis', []) %}{{ redis_endpoints.append('redis_' + host + ':tcp_connect:' + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['redis_port']) }}{% endfor %}{{ redis_endpoints }}\", 'enabled': '{{ enable_redis | bool }}'}]: 'swift_public_base_endpoint' is undefined"} 2025-09-27 21:50:28.608870 | orchestrator | 2025-09-27 21:50:28.608880 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-27 21:50:28.608890 | orchestrator | Saturday 27 September 2025 21:49:09 +0000 (0:00:10.472) 0:01:01.603 **** 2025-09-27 21:50:28.608900 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 21:50:28.608910 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.608920 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 21:50:28.608929 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.608939 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 21:50:28.608949 | orchestrator | skipping: [testbed-node-2] 
2025-09-27 21:50:28.608959 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 21:50:28.608968 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.608978 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 21:50:28.608987 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.608997 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-27 21:50:28.609007 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609016 | orchestrator | 2025-09-27 21:50:28.609026 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-27 21:50:28.609036 | orchestrator | Saturday 27 September 2025 21:49:11 +0000 (0:00:01.290) 0:01:02.893 **** 2025-09-27 21:50:28.609046 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 21:50:28.609055 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609065 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 21:50:28.609075 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.609085 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 21:50:28.609094 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.609104 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 21:50:28.609118 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609128 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 21:50:28.609138 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.609148 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-27 21:50:28.609163 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609172 | orchestrator | 2025-09-27 21:50:28.609182 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-27 21:50:28.609192 | orchestrator | Saturday 27 September 2025 21:49:12 +0000 (0:00:01.119) 0:01:04.013 **** 2025-09-27 21:50:28.609202 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:50:28.609211 | orchestrator | 2025-09-27 21:50:28.609225 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-27 21:50:28.609235 | orchestrator | Saturday 27 September 2025 21:49:13 +0000 (0:00:00.877) 0:01:04.890 **** 2025-09-27 21:50:28.609244 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609254 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.609262 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.609270 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609278 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.609286 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609294 | orchestrator | 2025-09-27 21:50:28.609302 | 
orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-27 21:50:28.609309 | orchestrator | Saturday 27 September 2025 21:49:13 +0000 (0:00:00.600) 0:01:05.491 **** 2025-09-27 21:50:28.609317 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609325 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.609333 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609341 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:28.609349 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:28.609357 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:28.609365 | orchestrator | 2025-09-27 21:50:28.609372 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-27 21:50:28.609380 | orchestrator | Saturday 27 September 2025 21:49:15 +0000 (0:00:02.028) 0:01:07.519 **** 2025-09-27 21:50:28.609388 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 21:50:28.609396 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609404 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 21:50:28.609412 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.609420 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 21:50:28.609428 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.609436 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 21:50:28.609443 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.609451 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 21:50:28.609459 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609467 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-27 21:50:28.609475 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609483 | orchestrator | 2025-09-27 21:50:28.609491 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-27 21:50:28.609498 | orchestrator | Saturday 27 September 2025 21:49:17 +0000 (0:00:01.502) 0:01:09.022 **** 2025-09-27 21:50:28.609506 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 21:50:28.609514 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 21:50:28.609522 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.609530 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609538 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 21:50:28.609546 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.609554 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 21:50:28.609566 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609574 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 21:50:28.609582 | orchestrator | skipping: [testbed-node-4] 
2025-09-27 21:50:28.609590 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-27 21:50:28.609598 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609606 | orchestrator | 2025-09-27 21:50:28.609614 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-27 21:50:28.609622 | orchestrator | Saturday 27 September 2025 21:49:18 +0000 (0:00:01.418) 0:01:10.441 **** 2025-09-27 21:50:28.609630 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609637 | orchestrator | 2025-09-27 21:50:28.609645 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-27 21:50:28.609653 | orchestrator | Saturday 27 September 2025 21:49:19 +0000 (0:00:00.905) 0:01:11.346 **** 2025-09-27 21:50:28.609661 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609669 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.609688 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.609696 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609704 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.609712 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609720 | orchestrator | 2025-09-27 21:50:28.609731 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-27 21:50:28.609740 | orchestrator | Saturday 27 September 2025 21:49:20 +0000 (0:00:00.613) 0:01:11.960 **** 2025-09-27 21:50:28.609748 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:28.609755 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:28.609763 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:28.609771 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:50:28.609779 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:50:28.609787 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:50:28.609795 | orchestrator | 2025-09-27 21:50:28.609802 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-27 21:50:28.609811 | orchestrator | Saturday 27 September 2025 21:49:20 +0000 (0:00:00.715) 0:01:12.675 **** 2025-09-27 21:50:28.609822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.609831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.609839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.609853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.609861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.609870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.609879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-27 21:50:28.609891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.609902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.609911 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.609919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.609933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.609941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.609949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.609957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.609970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.609981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.609990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.609998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.610011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.610046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-27 21:50:28.610055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.610067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.610076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-27 21:50:28.610084 | orchestrator | 2025-09-27 21:50:28.610092 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-27 21:50:28.610104 | orchestrator | Saturday 27 September 2025 21:49:25 +0000 (0:00:04.397) 0:01:17.073 **** 2025-09-27 21:50:28.610112 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-09-27 21:50:28.610120 | orchestrator | 2025-09-27 21:50:28.610128 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 21:50:28.610136 | orchestrator | Saturday 27 September 2025 21:49:28 +0000 (0:00:03.410) 0:01:20.483 **** 2025-09-27 21:50:28.610144 | orchestrator | 2025-09-27 21:50:28.610152 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 21:50:28.610160 | orchestrator | Saturday 27 September 2025 21:49:28 +0000 (0:00:00.059) 0:01:20.543 **** 2025-09-27 21:50:28.610168 | orchestrator | 2025-09-27 21:50:28.610181 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 21:50:28.610189 | orchestrator | Saturday 27 September 2025 21:49:28 +0000 (0:00:00.057) 0:01:20.600 **** 2025-09-27 21:50:28.610197 | orchestrator | 2025-09-27 21:50:28.610205 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 21:50:28.610213 | orchestrator | Saturday 27 September 2025 21:49:28 +0000 (0:00:00.057) 0:01:20.658 **** 2025-09-27 21:50:28.610220 | orchestrator | 2025-09-27 21:50:28.610228 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 21:50:28.610236 | orchestrator | Saturday 27 September 2025 21:49:29 +0000 (0:00:00.057) 0:01:20.715 **** 2025-09-27 21:50:28.610244 | orchestrator | 2025-09-27 21:50:28.610252 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-27 21:50:28.610260 | orchestrator | Saturday 27 September 2025 21:49:29 +0000 (0:00:00.177) 0:01:20.892 **** 2025-09-27 21:50:28.610268 | orchestrator | 2025-09-27 21:50:28.610276 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-27 21:50:28.610284 | orchestrator | Saturday 27 September 2025 
21:49:29 +0000 (0:00:00.062) 0:01:20.954 **** 2025-09-27 21:50:28.610292 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:28.610299 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:50:28.610307 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:50:28.610315 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:28.610323 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:28.610331 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:50:28.610339 | orchestrator | 2025-09-27 21:50:28.610347 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-27 21:50:28.610355 | orchestrator | Saturday 27 September 2025 21:49:40 +0000 (0:00:11.079) 0:01:32.034 **** 2025-09-27 21:50:28.610363 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:28.610371 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:28.610378 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:28.610386 | orchestrator | 2025-09-27 21:50:28.610394 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-27 21:50:28.610402 | orchestrator | Saturday 27 September 2025 21:49:50 +0000 (0:00:10.646) 0:01:42.680 **** 2025-09-27 21:50:28.610410 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:28.610418 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:28.610426 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:28.610434 | orchestrator | 2025-09-27 21:50:28.610441 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-27 21:50:28.610449 | orchestrator | Saturday 27 September 2025 21:49:57 +0000 (0:00:06.337) 0:01:49.017 **** 2025-09-27 21:50:28.610457 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:50:28.610465 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:50:28.610473 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:50:28.610481 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:28.610489 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:28.610496 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:28.610504 | orchestrator | 2025-09-27 21:50:28.610512 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-27 21:50:28.610520 | orchestrator | Saturday 27 September 2025 21:50:10 +0000 (0:00:13.316) 0:02:02.333 **** 2025-09-27 21:50:28.610528 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:28.610536 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:28.610544 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:28.610552 | orchestrator | 2025-09-27 21:50:28.610560 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-27 21:50:28.610568 | orchestrator | Saturday 27 September 2025 21:50:20 +0000 (0:00:09.989) 0:02:12.323 **** 2025-09-27 21:50:28.610575 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:50:28.610583 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:50:28.610591 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:50:28.610599 | orchestrator | 2025-09-27 21:50:28.610612 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:50:28.610620 | orchestrator | testbed-manager : ok=11  changed=4  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025-09-27 21:50:28.610628 | orchestrator | testbed-node-0 : ok=17  changed=11  
unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 21:50:28.610640 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-27 21:50:28.610648 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-27 21:50:28.610656 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 21:50:28.610664 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 21:50:28.610687 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-27 21:50:28.610695 | orchestrator | 2025-09-27 21:50:28.610703 | orchestrator | 2025-09-27 21:50:28.610711 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:50:28.610719 | orchestrator | Saturday 27 September 2025 21:50:26 +0000 (0:00:06.049) 0:02:18.372 **** 2025-09-27 21:50:28.610728 | orchestrator | =============================================================================== 2025-09-27 21:50:28.610736 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.69s 2025-09-27 21:50:28.610744 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.32s 2025-09-27 21:50:28.610751 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 11.08s 2025-09-27 21:50:28.610759 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.65s 2025-09-27 21:50:28.610767 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 10.47s 2025-09-27 21:50:28.610775 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.99s 2025-09-27 21:50:28.610783 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.48s 2025-09-27 21:50:28.610791 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.34s 2025-09-27 21:50:28.610799 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.05s 2025-09-27 21:50:28.610807 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.79s 2025-09-27 21:50:28.610815 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.40s 2025-09-27 21:50:28.610823 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 3.41s 2025-09-27 21:50:28.610831 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.61s 2025-09-27 21:50:28.610839 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.56s 2025-09-27 21:50:28.610846 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.52s 2025-09-27 21:50:28.610854 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.29s 2025-09-27 21:50:28.610862 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.03s 2025-09-27 21:50:28.610870 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.50s 2025-09-27 21:50:28.610878 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.45s 
2025-09-27 21:50:28.610886 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 1.42s 2025-09-27 21:50:28.610894 | orchestrator | 2025-09-27 21:50:28 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:28.610907 | orchestrator | 2025-09-27 21:50:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:31.638215 | orchestrator | 2025-09-27 21:50:31 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:31.638737 | orchestrator | 2025-09-27 21:50:31 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:31.639399 | orchestrator | 2025-09-27 21:50:31 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:31.640303 | orchestrator | 2025-09-27 21:50:31 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:31.640326 | orchestrator | 2025-09-27 21:50:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:34.694597 | orchestrator | 2025-09-27 21:50:34 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:34.694696 | orchestrator | 2025-09-27 21:50:34 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:34.695370 | orchestrator | 2025-09-27 21:50:34 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:34.696050 | orchestrator | 2025-09-27 21:50:34 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:34.696137 | orchestrator | 2025-09-27 21:50:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:37.726282 | orchestrator | 2025-09-27 21:50:37 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:37.726658 | orchestrator | 2025-09-27 21:50:37 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:37.728001 | orchestrator | 2025-09-27 21:50:37 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:37.728723 | orchestrator | 2025-09-27 21:50:37 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:37.728918 | orchestrator | 2025-09-27 21:50:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:40.757333 | orchestrator | 2025-09-27 21:50:40 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:40.758775 | orchestrator | 2025-09-27 21:50:40 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:40.759739 | orchestrator | 2025-09-27 21:50:40 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:40.760603 | orchestrator | 2025-09-27 21:50:40 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:40.760633 | orchestrator | 2025-09-27 21:50:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:43.789948 | orchestrator | 2025-09-27 21:50:43 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:43.790365 | orchestrator | 2025-09-27 21:50:43 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:43.791081 | orchestrator | 2025-09-27 21:50:43 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:43.791854 | orchestrator | 2025-09-27 21:50:43 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 
21:50:43.791983 | orchestrator | 2025-09-27 21:50:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:46.814231 | orchestrator | 2025-09-27 21:50:46 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:46.814592 | orchestrator | 2025-09-27 21:50:46 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:46.815522 | orchestrator | 2025-09-27 21:50:46 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state STARTED 2025-09-27 21:50:46.816340 | orchestrator | 2025-09-27 21:50:46 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:46.816368 | orchestrator | 2025-09-27 21:50:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:49.845500 | orchestrator | 2025-09-27 21:50:49 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:49.846098 | orchestrator | 2025-09-27 21:50:49 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:49.848087 | orchestrator | 2025-09-27 21:50:49 | INFO  | Task d86a5ecd-a397-4235-a00d-8c74b8216240 is in state SUCCESS 2025-09-27 21:50:49.849834 | orchestrator | 2025-09-27 21:50:49.849871 | orchestrator | 2025-09-27 21:50:49.849883 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:50:49.849900 | orchestrator | 2025-09-27 21:50:49.849920 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:50:49.849939 | orchestrator | Saturday 27 September 2025 21:49:51 +0000 (0:00:00.332) 0:00:00.332 **** 2025-09-27 21:50:49.849950 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:49.849962 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:49.849973 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:49.849984 | orchestrator | 2025-09-27 21:50:49.849995 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:50:49.850006 | orchestrator | Saturday 27 September 2025 21:49:51 +0000 (0:00:00.402) 0:00:00.735 **** 2025-09-27 21:50:49.850060 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-27 21:50:49.850075 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-09-27 21:50:49.850086 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-27 21:50:49.850119 | orchestrator | 2025-09-27 21:50:49.850131 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-27 21:50:49.850142 | orchestrator | 2025-09-27 21:50:49.850152 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-27 21:50:49.850163 | orchestrator | Saturday 27 September 2025 21:49:52 +0000 (0:00:00.546) 0:00:01.281 **** 2025-09-27 21:50:49.850174 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:50:49.850186 | orchestrator | 2025-09-27 21:50:49.850197 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-27 21:50:49.850208 | orchestrator | Saturday 27 September 2025 21:49:53 +0000 (0:00:00.964) 0:00:02.245 **** 2025-09-27 21:50:49.850219 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-27 21:50:49.850231 | orchestrator | 2025-09-27 21:50:49.850241 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] 
*********************** 2025-09-27 21:50:49.850252 | orchestrator | Saturday 27 September 2025 21:49:56 +0000 (0:00:03.824) 0:00:06.070 **** 2025-09-27 21:50:49.850263 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-27 21:50:49.850274 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-27 21:50:49.850285 | orchestrator | 2025-09-27 21:50:49.850296 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-27 21:50:49.850306 | orchestrator | Saturday 27 September 2025 21:50:03 +0000 (0:00:06.887) 0:00:12.957 **** 2025-09-27 21:50:49.850317 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:50:49.850329 | orchestrator | 2025-09-27 21:50:49.850340 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-27 21:50:49.850354 | orchestrator | Saturday 27 September 2025 21:50:07 +0000 (0:00:03.651) 0:00:16.609 **** 2025-09-27 21:50:49.850372 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:50:49.850424 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-27 21:50:49.850445 | orchestrator | 2025-09-27 21:50:49.850479 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-27 21:50:49.850499 | orchestrator | Saturday 27 September 2025 21:50:11 +0000 (0:00:04.220) 0:00:20.830 **** 2025-09-27 21:50:49.850519 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:50:49.850538 | orchestrator | 2025-09-27 21:50:49.850559 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-27 21:50:49.850578 | orchestrator | Saturday 27 September 2025 21:50:15 +0000 (0:00:03.381) 0:00:24.211 **** 2025-09-27 21:50:49.850600 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-27 21:50:49.850620 | orchestrator | 2025-09-27 21:50:49.850640 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-27 21:50:49.850683 | orchestrator | Saturday 27 September 2025 21:50:19 +0000 (0:00:04.273) 0:00:28.485 **** 2025-09-27 21:50:49.850732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:50:49.850762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:50:49.850807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:50:49.850829 | orchestrator | 2025-09-27 21:50:49.850850 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-27 21:50:49.850868 | orchestrator | Saturday 27 September 2025 21:50:23 +0000 (0:00:04.055) 0:00:32.541 **** 2025-09-27 21:50:49.850887 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:50:49.850906 | orchestrator | 2025-09-27 21:50:49.850936 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-27 21:50:49.850957 | orchestrator | Saturday 27 September 2025 21:50:24 +0000 (0:00:00.735) 0:00:33.276 **** 2025-09-27 21:50:49.850973 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:50:49.850984 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:50:49.850995 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:50:49.851006 | orchestrator | 2025-09-27 21:50:49.851017 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-27 21:50:49.851028 | orchestrator | Saturday 27 September 2025 21:50:27 +0000 (0:00:03.273) 0:00:36.550 **** 2025-09-27 21:50:49.851039 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:50:49.851050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:50:49.851061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:50:49.851071 | orchestrator | 2025-09-27 21:50:49.851082 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-27 21:50:49.851093 | orchestrator | Saturday 27 September 2025 21:50:28 +0000 (0:00:01.492) 0:00:38.042 **** 2025-09-27 21:50:49.851104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:50:49.851115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:50:49.851133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:50:49.851144 | orchestrator | 2025-09-27 21:50:49.851156 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-27 21:50:49.851166 | orchestrator | Saturday 27 September 2025 21:50:30 +0000 (0:00:01.260) 0:00:39.303 **** 2025-09-27 21:50:49.851177 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:50:49.851188 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:50:49.851199 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:50:49.851210 | orchestrator | 2025-09-27 21:50:49.851221 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-27 21:50:49.851231 | orchestrator | Saturday 27 September 2025 21:50:30 +0000 
(0:00:00.631) 0:00:39.935 **** 2025-09-27 21:50:49.851242 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:49.851253 | orchestrator | 2025-09-27 21:50:49.851264 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-27 21:50:49.851275 | orchestrator | Saturday 27 September 2025 21:50:30 +0000 (0:00:00.230) 0:00:40.165 **** 2025-09-27 21:50:49.851286 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:49.851296 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:49.851307 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:49.851318 | orchestrator | 2025-09-27 21:50:49.851329 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-27 21:50:49.851339 | orchestrator | Saturday 27 September 2025 21:50:31 +0000 (0:00:00.400) 0:00:40.566 **** 2025-09-27 21:50:49.851355 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:50:49.851366 | orchestrator | 2025-09-27 21:50:49.851377 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-27 21:50:49.851388 | orchestrator | Saturday 27 September 2025 21:50:32 +0000 (0:00:00.699) 0:00:41.266 **** 2025-09-27 21:50:49.851407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:50:49.851421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:50:49.851445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-27 21:50:49.851457 | orchestrator | 2025-09-27 21:50:49.851468 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-27 21:50:49.851483 | orchestrator | Saturday 27 September 2025 21:50:36 +0000 (0:00:04.090) 0:00:45.356 **** 2025-09-27 21:50:49.851514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:50:49.851544 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:49.851570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:50:49.851591 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:49.851617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 
'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:50:49.851637 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:49.851672 | orchestrator | 2025-09-27 21:50:49.851684 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-27 21:50:49.851695 | orchestrator | Saturday 27 September 2025 21:50:39 +0000 (0:00:03.201) 0:00:48.558 **** 2025-09-27 21:50:49.851711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:50:49.851724 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:49.851742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:50:49.851761 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:49.851773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-27 21:50:49.851785 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:49.851795 | orchestrator | 2025-09-27 21:50:49.851806 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-27 21:50:49.851817 | orchestrator | Saturday 27 September 2025 21:50:42 +0000 (0:00:03.268) 0:00:51.827 **** 2025-09-27 21:50:49.851828 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:50:49.851845 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:50:49.851856 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:50:49.851867 | orchestrator | 2025-09-27 21:50:49.851878 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-27 21:50:49.851889 | orchestrator | Saturday 27 September 2025 21:50:46 +0000 (0:00:03.420) 0:00:55.248 **** 2025-09-27 21:50:49.851901 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'glance_backend_swift' is undefined 2025-09-27 21:50:49.851923 | orchestrator | failed: [testbed-node-1] (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "glance-api", "value": {"container_name": "glance_api", "dimensions": {}, "enabled": true, "environment": {"http_proxy": "", "https_proxy": "", "no_proxy": "localhost,127.0.0.1,192.168.16.11,192.168.16.9"}, "group": "glance-api", "haproxy": {"glance_api": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": false, "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}, "glance_api_external": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": true, "external_fqdn": "api.testbed.osism.xyz", "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}}, "healthcheck": {"interval": "30", "retries": "3", "start_period": "5", "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9292"], "timeout": "30"}, "host_in_groups": true, "image": "registry.osism.tech/kolla/glance-api:2024.2", "privileged": true, "volumes": ["/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "glance:/var/lib/glance/", "", "kolla_logs:/var/log/kolla/", "", "iscsi_info:/etc/iscsi", "/dev:/dev"]}}, "msg": "AnsibleUndefinedVariable: 'glance_backend_swift' is undefined"} 2025-09-27 21:50:49.851949 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'glance_backend_swift' is undefined 2025-09-27 21:50:49.851975 | orchestrator | failed: [testbed-node-0] (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "glance-api", "value": {"container_name": "glance_api", "dimensions": {}, "enabled": true, "environment": {"http_proxy": "", "https_proxy": "", "no_proxy": "localhost,127.0.0.1,192.168.16.10,192.168.16.9"}, "group": "glance-api", "haproxy": {"glance_api": 
{"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": false, "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}, "glance_api_external": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": true, "external_fqdn": "api.testbed.osism.xyz", "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}}, "healthcheck": {"interval": "30", "retries": "3", "start_period": "5", "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"], "timeout": "30"}, "host_in_groups": true, "image": "registry.osism.tech/kolla/glance-api:2024.2", "privileged": true, "volumes": ["/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "glance:/var/lib/glance/", "", "kolla_logs:/var/log/kolla/", "", "iscsi_info:/etc/iscsi", "/dev:/dev"]}}, "msg": "AnsibleUndefinedVariable: 'glance_backend_swift' is undefined"} 2025-09-27 21:50:49.851994 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ansible.errors.AnsibleUndefinedVariable: 'glance_backend_swift' is undefined 2025-09-27 21:50:49.852019 | orchestrator | failed: [testbed-node-2] (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) => {"ansible_loop_var": "item", "changed": false, "item": {"key": "glance-api", "value": {"container_name": "glance_api", 
"dimensions": {}, "enabled": true, "environment": {"http_proxy": "", "https_proxy": "", "no_proxy": "localhost,127.0.0.1,192.168.16.12,192.168.16.9"}, "group": "glance-api", "haproxy": {"glance_api": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": false, "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}, "glance_api_external": {"backend_http_extra": ["timeout server 6h"], "custom_member_list": ["server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5", "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5", "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5", ""], "enabled": true, "external": true, "external_fqdn": "api.testbed.osism.xyz", "frontend_http_extra": ["timeout client 6h"], "mode": "http", "port": "9292"}}, "healthcheck": {"interval": "30", "retries": "3", "start_period": "5", "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.12:9292"], "timeout": "30"}, "host_in_groups": true, "image": "registry.osism.tech/kolla/glance-api:2024.2", "privileged": true, "volumes": ["/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "glance:/var/lib/glance/", "", "kolla_logs:/var/log/kolla/", "", "iscsi_info:/etc/iscsi", "/dev:/dev"]}}, "msg": "AnsibleUndefinedVariable: 'glance_backend_swift' is undefined"} 2025-09-27 21:50:49.852038 | orchestrator | 2025-09-27 21:50:49.852050 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:50:49.852061 | orchestrator | testbed-node-0 : ok=17  changed=9  unreachable=0 failed=1  skipped=5  rescued=0 ignored=0 2025-09-27 21:50:49.852073 | orchestrator | testbed-node-1 : ok=11  changed=5  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-09-27 21:50:49.852084 | orchestrator | testbed-node-2 : ok=11  changed=5  unreachable=0 failed=1  skipped=4  rescued=0 ignored=0 2025-09-27 21:50:49.852095 | orchestrator | 2025-09-27 21:50:49.852106 | orchestrator | 2025-09-27 21:50:49.852117 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:50:49.852128 | orchestrator | Saturday 27 September 2025 21:50:49 +0000 (0:00:03.396) 0:00:58.644 **** 2025-09-27 21:50:49.852139 | orchestrator | =============================================================================== 2025-09-27 21:50:49.852150 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.89s 2025-09-27 21:50:49.852161 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.27s 2025-09-27 21:50:49.852176 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.22s 2025-09-27 21:50:49.852187 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.09s 2025-09-27 21:50:49.852198 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.06s 2025-09-27 21:50:49.852209 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.82s 2025-09-27 21:50:49.852220 | orchestrator | service-ks-register : glance | Creating projects 
------------------------ 3.65s 2025-09-27 21:50:49.852231 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.42s 2025-09-27 21:50:49.852242 | orchestrator | glance : Copying over config.json files for services -------------------- 3.40s 2025-09-27 21:50:49.852253 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.38s 2025-09-27 21:50:49.852270 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.27s 2025-09-27 21:50:49.852281 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.27s 2025-09-27 21:50:49.852292 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.20s 2025-09-27 21:50:49.852303 | orchestrator | glance : Copy over multiple ceph configs for Glance --------------------- 1.49s 2025-09-27 21:50:49.852313 | orchestrator | glance : Copy over ceph Glance keyrings --------------------------------- 1.26s 2025-09-27 21:50:49.852324 | orchestrator | glance : include_tasks -------------------------------------------------- 0.96s 2025-09-27 21:50:49.852335 | orchestrator | glance : include_tasks -------------------------------------------------- 0.74s 2025-09-27 21:50:49.852346 | orchestrator | glance : include_tasks -------------------------------------------------- 0.70s 2025-09-27 21:50:49.852357 | orchestrator | glance : Ensuring config directory has correct owner and permission ----- 0.63s 2025-09-27 21:50:49.852368 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2025-09-27 21:50:49.852379 | orchestrator | 2025-09-27 21:50:49 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:49.852390 | orchestrator | 2025-09-27 21:50:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:52.877053 | orchestrator | 2025-09-27 21:50:52 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:52.877162 | orchestrator | 2025-09-27 21:50:52 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:52.877834 | orchestrator | 2025-09-27 21:50:52 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:50:52.879572 | orchestrator | 2025-09-27 21:50:52 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:52.879598 | orchestrator | 2025-09-27 21:50:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:55.916966 | orchestrator | 2025-09-27 21:50:55 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:55.917074 | orchestrator | 2025-09-27 21:50:55 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:55.917089 | orchestrator | 2025-09-27 21:50:55 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:50:55.917860 | orchestrator | 2025-09-27 21:50:55 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:55.917885 | orchestrator | 2025-09-27 21:50:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:50:58.965472 | orchestrator | 2025-09-27 21:50:58 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:50:58.966136 | orchestrator | 2025-09-27 21:50:58 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:50:58.968789 | orchestrator | 2025-09-27 21:50:58 | INFO  | 
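The failed task renders Glance's config.json template, and the rendering aborts because glance_backend_swift is undefined on the target hosts; in stock kolla-ansible this variable is normally derived from enable_swift in group_vars/all.yml, so an undefined value usually means the variable set used for templating is incomplete. A minimal sketch of an explicit override that would let the template render, assuming (OSISM convention, not confirmed by this log) that kolla overrides live in environments/kolla/configuration.yml:

    glance_backend_swift: "no"    # Swift image store is not used in this testbed
    glance_backend_ceph: "yes"    # the RBD store configured by external_ceph.yml above
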
Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:50:58.969753 | orchestrator | 2025-09-27 21:50:58 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:50:58.969780 | orchestrator | 2025-09-27 21:50:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:01.999922 | orchestrator | 2025-09-27 21:51:02 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:02.000568 | orchestrator | 2025-09-27 21:51:02 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:02.001574 | orchestrator | 2025-09-27 21:51:02 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:02.002370 | orchestrator | 2025-09-27 21:51:02 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:51:02.002427 | orchestrator | 2025-09-27 21:51:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:05.041797 | orchestrator | 2025-09-27 21:51:05 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:05.042768 | orchestrator | 2025-09-27 21:51:05 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:05.044559 | orchestrator | 2025-09-27 21:51:05 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:05.045994 | orchestrator | 2025-09-27 21:51:05 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:51:05.046071 | orchestrator | 2025-09-27 21:51:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:08.073903 | orchestrator | 2025-09-27 21:51:08 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:08.075336 | orchestrator | 2025-09-27 21:51:08 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:08.076080 | orchestrator | 2025-09-27 21:51:08 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:08.077281 | orchestrator | 2025-09-27 21:51:08 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:51:08.077452 | orchestrator | 2025-09-27 21:51:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:11.108003 | orchestrator | 2025-09-27 21:51:11 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:11.108758 | orchestrator | 2025-09-27 21:51:11 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:11.109594 | orchestrator | 2025-09-27 21:51:11 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:11.110183 | orchestrator | 2025-09-27 21:51:11 | INFO  | Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state STARTED 2025-09-27 21:51:11.110227 | orchestrator | 2025-09-27 21:51:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:14.159100 | orchestrator | 2025-09-27 21:51:14 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:14.160911 | orchestrator | 2025-09-27 21:51:14 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:14.162759 | orchestrator | 2025-09-27 21:51:14 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:14.164792 | orchestrator | 2025-09-27 21:51:14 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:14.166342 | orchestrator | 2025-09-27 21:51:14 | INFO  | 
Task 26ad0ab0-44cb-4fa4-baab-9cac654dc0ac is in state SUCCESS 2025-09-27 21:51:14.166367 | orchestrator | 2025-09-27 21:51:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:17.221263 | orchestrator | 2025-09-27 21:51:17 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:17.223163 | orchestrator | 2025-09-27 21:51:17 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:17.224646 | orchestrator | 2025-09-27 21:51:17 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:17.226365 | orchestrator | 2025-09-27 21:51:17 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:17.226683 | orchestrator | 2025-09-27 21:51:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:20.281515 | orchestrator | 2025-09-27 21:51:20 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:20.284090 | orchestrator | 2025-09-27 21:51:20 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:20.286180 | orchestrator | 2025-09-27 21:51:20 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:20.288299 | orchestrator | 2025-09-27 21:51:20 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:20.288370 | orchestrator | 2025-09-27 21:51:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:23.339308 | orchestrator | 2025-09-27 21:51:23 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:23.340559 | orchestrator | 2025-09-27 21:51:23 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:23.342654 | orchestrator | 2025-09-27 21:51:23 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:23.344468 | orchestrator | 2025-09-27 21:51:23 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:23.344492 | orchestrator | 2025-09-27 21:51:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:26.399704 | orchestrator | 2025-09-27 21:51:26 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:26.401720 | orchestrator | 2025-09-27 21:51:26 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:26.403525 | orchestrator | 2025-09-27 21:51:26 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:26.404990 | orchestrator | 2025-09-27 21:51:26 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:26.405024 | orchestrator | 2025-09-27 21:51:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:29.450363 | orchestrator | 2025-09-27 21:51:29 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:29.455754 | orchestrator | 2025-09-27 21:51:29 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:29.457972 | orchestrator | 2025-09-27 21:51:29 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:29.460436 | orchestrator | 2025-09-27 21:51:29 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:29.460472 | orchestrator | 2025-09-27 21:51:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:32.504086 | orchestrator | 2025-09-27 21:51:32 | INFO  | Task 
f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:32.505175 | orchestrator | 2025-09-27 21:51:32 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:32.506548 | orchestrator | 2025-09-27 21:51:32 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:32.508839 | orchestrator | 2025-09-27 21:51:32 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:32.509293 | orchestrator | 2025-09-27 21:51:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:35.549853 | orchestrator | 2025-09-27 21:51:35 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:35.550848 | orchestrator | 2025-09-27 21:51:35 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:35.552555 | orchestrator | 2025-09-27 21:51:35 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:35.554075 | orchestrator | 2025-09-27 21:51:35 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:35.554107 | orchestrator | 2025-09-27 21:51:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:38.600737 | orchestrator | 2025-09-27 21:51:38 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:38.602239 | orchestrator | 2025-09-27 21:51:38 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:38.603863 | orchestrator | 2025-09-27 21:51:38 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:38.605962 | orchestrator | 2025-09-27 21:51:38 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:38.606001 | orchestrator | 2025-09-27 21:51:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:41.635966 | orchestrator | 2025-09-27 21:51:41 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:41.637954 | orchestrator | 2025-09-27 21:51:41 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:41.643171 | orchestrator | 2025-09-27 21:51:41 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:41.644740 | orchestrator | 2025-09-27 21:51:41 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:41.644990 | orchestrator | 2025-09-27 21:51:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:44.681912 | orchestrator | 2025-09-27 21:51:44 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:44.683124 | orchestrator | 2025-09-27 21:51:44 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:44.685057 | orchestrator | 2025-09-27 21:51:44 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:44.686430 | orchestrator | 2025-09-27 21:51:44 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:44.686473 | orchestrator | 2025-09-27 21:51:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:47.731864 | orchestrator | 2025-09-27 21:51:47 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:47.733288 | orchestrator | 2025-09-27 21:51:47 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:47.735017 | orchestrator | 2025-09-27 21:51:47 | INFO  | Task 
de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:47.736394 | orchestrator | 2025-09-27 21:51:47 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:47.736595 | orchestrator | 2025-09-27 21:51:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:50.782432 | orchestrator | 2025-09-27 21:51:50 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:50.783647 | orchestrator | 2025-09-27 21:51:50 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:50.785045 | orchestrator | 2025-09-27 21:51:50 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:50.786337 | orchestrator | 2025-09-27 21:51:50 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:50.786367 | orchestrator | 2025-09-27 21:51:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:53.830980 | orchestrator | 2025-09-27 21:51:53 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:53.832489 | orchestrator | 2025-09-27 21:51:53 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:53.834638 | orchestrator | 2025-09-27 21:51:53 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:53.837545 | orchestrator | 2025-09-27 21:51:53 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:53.837984 | orchestrator | 2025-09-27 21:51:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:56.876788 | orchestrator | 2025-09-27 21:51:56 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:56.877984 | orchestrator | 2025-09-27 21:51:56 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:56.879377 | orchestrator | 2025-09-27 21:51:56 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:56.881643 | orchestrator | 2025-09-27 21:51:56 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:56.881675 | orchestrator | 2025-09-27 21:51:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:51:59.925139 | orchestrator | 2025-09-27 21:51:59 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:51:59.926731 | orchestrator | 2025-09-27 21:51:59 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:51:59.928706 | orchestrator | 2025-09-27 21:51:59 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:51:59.930089 | orchestrator | 2025-09-27 21:51:59 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:51:59.930124 | orchestrator | 2025-09-27 21:51:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:02.966159 | orchestrator | 2025-09-27 21:52:02 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:52:02.966618 | orchestrator | 2025-09-27 21:52:02 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:02.967685 | orchestrator | 2025-09-27 21:52:02 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:02.969696 | orchestrator | 2025-09-27 21:52:02 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:02.969713 | orchestrator | 2025-09-27 21:52:02 | INFO  | Wait 1 
second(s) until the next check 2025-09-27 21:52:06.025241 | orchestrator | 2025-09-27 21:52:06 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:52:06.026592 | orchestrator | 2025-09-27 21:52:06 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:06.028223 | orchestrator | 2025-09-27 21:52:06 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:06.029566 | orchestrator | 2025-09-27 21:52:06 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:06.029614 | orchestrator | 2025-09-27 21:52:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:09.075123 | orchestrator | 2025-09-27 21:52:09 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:52:09.076581 | orchestrator | 2025-09-27 21:52:09 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:09.078415 | orchestrator | 2025-09-27 21:52:09 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:09.081042 | orchestrator | 2025-09-27 21:52:09 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:09.081070 | orchestrator | 2025-09-27 21:52:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:12.120919 | orchestrator | 2025-09-27 21:52:12 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state STARTED 2025-09-27 21:52:12.123249 | orchestrator | 2025-09-27 21:52:12 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:12.123292 | orchestrator | 2025-09-27 21:52:12 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:12.123298 | orchestrator | 2025-09-27 21:52:12 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:12.123302 | orchestrator | 2025-09-27 21:52:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:15.170226 | orchestrator | 2025-09-27 21:52:15 | INFO  | Task f7757824-03de-46e0-a452-c6ab25ff43a5 is in state SUCCESS 2025-09-27 21:52:15.170449 | orchestrator | 2025-09-27 21:52:15.170464 | orchestrator | 2025-09-27 21:52:15.170469 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-27 21:52:15.170474 | orchestrator | 2025-09-27 21:52:15.170478 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-27 21:52:15.170482 | orchestrator | Saturday 27 September 2025 21:44:47 +0000 (0:00:00.186) 0:00:00.186 **** 2025-09-27 21:52:15.170486 | orchestrator | changed: [localhost] 2025-09-27 21:52:15.170492 | orchestrator | 2025-09-27 21:52:15.170496 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-27 21:52:15.170500 | orchestrator | Saturday 27 September 2025 21:44:48 +0000 (0:00:01.266) 0:00:01.453 **** 2025-09-27 21:52:15.170504 | orchestrator | 2025-09-27 21:52:15.170507 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170511 | orchestrator | 2025-09-27 21:52:15.170549 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170554 | orchestrator | 2025-09-27 21:52:15.170558 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170561 | orchestrator | 2025-09-27 
21:52:15.170565 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170569 | orchestrator | 2025-09-27 21:52:15.170573 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170577 | orchestrator | 2025-09-27 21:52:15.170581 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170584 | orchestrator | 2025-09-27 21:52:15.170588 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-27 21:52:15.170592 | orchestrator | changed: [localhost] 2025-09-27 21:52:15.170596 | orchestrator | 2025-09-27 21:52:15.170600 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-27 21:52:15.170604 | orchestrator | Saturday 27 September 2025 21:50:45 +0000 (0:05:57.368) 0:05:58.821 **** 2025-09-27 21:52:15.170608 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 2025-09-27 21:52:15.170612 | orchestrator | changed: [localhost] 2025-09-27 21:52:15.170616 | orchestrator | 2025-09-27 21:52:15.170620 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:52:15.170623 | orchestrator | 2025-09-27 21:52:15.170627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:52:15.170631 | orchestrator | Saturday 27 September 2025 21:51:11 +0000 (0:00:25.097) 0:06:23.919 **** 2025-09-27 21:52:15.170634 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:52:15.170638 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:52:15.170642 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:52:15.170646 | orchestrator | 2025-09-27 21:52:15.170649 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:52:15.170653 | orchestrator | Saturday 27 September 2025 21:51:11 +0000 (0:00:00.312) 0:06:24.232 **** 2025-09-27 21:52:15.170657 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-27 21:52:15.170661 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-27 21:52:15.170682 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-27 21:52:15.170686 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-27 21:52:15.170690 | orchestrator | 2025-09-27 21:52:15.170712 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-27 21:52:15.170716 | orchestrator | skipping: no hosts matched 2025-09-27 21:52:15.170721 | orchestrator | 2025-09-27 21:52:15.170724 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:52:15.170729 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.170735 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.170755 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.170760 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.170764 | orchestrator | 2025-09-27 21:52:15.170788 | orchestrator | 2025-09-27 21:52:15.170792 | 
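The kernel download above failed once and then succeeded on a retry; the "(3 retries left)" message is Ansible's standard retries/until loop. A minimal sketch of an equivalent download task with that behavior, assuming a hypothetical URL variable and destination path (the real task's parameters are not shown in this log):

    - name: Download ironic-agent kernel              # sketch, not the playbook's actual task
      ansible.builtin.get_url:
        url: "{{ ironic_agent_kernel_url }}"           # hypothetical variable
        dest: /opt/ironic/ironic-agent.kernel          # hypothetical path
        mode: "0644"
      register: download_result
      retries: 3                                       # matches the "(3 retries left)" message
      delay: 10
      until: download_result is succeeded
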
orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:52:15.170797 | orchestrator | Saturday 27 September 2025 21:51:11 +0000 (0:00:00.422) 0:06:24.654 **** 2025-09-27 21:52:15.170812 | orchestrator | =============================================================================== 2025-09-27 21:52:15.170816 | orchestrator | Download ironic-agent initramfs --------------------------------------- 357.37s 2025-09-27 21:52:15.170820 | orchestrator | Download ironic-agent kernel ------------------------------------------- 25.10s 2025-09-27 21:52:15.170824 | orchestrator | Ensure the destination directory exists --------------------------------- 1.27s 2025-09-27 21:52:15.170828 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2025-09-27 21:52:15.170832 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-27 21:52:15.170835 | orchestrator | 2025-09-27 21:52:15.170839 | orchestrator | 2025-09-27 21:52:15.170843 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:52:15.170847 | orchestrator | 2025-09-27 21:52:15.170850 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:52:15.170854 | orchestrator | Saturday 27 September 2025 21:51:16 +0000 (0:00:00.256) 0:00:00.256 **** 2025-09-27 21:52:15.170858 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:52:15.170862 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:52:15.170865 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:52:15.170869 | orchestrator | 2025-09-27 21:52:15.170873 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:52:15.170877 | orchestrator | Saturday 27 September 2025 21:51:16 +0000 (0:00:00.296) 0:00:00.553 **** 2025-09-27 21:52:15.170887 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-27 21:52:15.170891 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-27 21:52:15.170895 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-27 21:52:15.170899 | orchestrator | 2025-09-27 21:52:15.170903 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-27 21:52:15.170907 | orchestrator | 2025-09-27 21:52:15.170910 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-27 21:52:15.170914 | orchestrator | Saturday 27 September 2025 21:51:16 +0000 (0:00:00.388) 0:00:00.941 **** 2025-09-27 21:52:15.170918 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:52:15.170922 | orchestrator | 2025-09-27 21:52:15.170926 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-27 21:52:15.170930 | orchestrator | Saturday 27 September 2025 21:51:17 +0000 (0:00:00.515) 0:00:01.457 **** 2025-09-27 21:52:15.170933 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-27 21:52:15.170937 | orchestrator | 2025-09-27 21:52:15.170941 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-27 21:52:15.170945 | orchestrator | Saturday 27 September 2025 21:51:20 +0000 (0:00:03.638) 0:00:05.096 **** 2025-09-27 21:52:15.170954 | orchestrator | changed: 
[testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-27 21:52:15.170958 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-27 21:52:15.170962 | orchestrator | 2025-09-27 21:52:15.170965 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-27 21:52:15.170970 | orchestrator | Saturday 27 September 2025 21:51:28 +0000 (0:00:07.254) 0:00:12.350 **** 2025-09-27 21:52:15.170973 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:52:15.170977 | orchestrator | 2025-09-27 21:52:15.170981 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-27 21:52:15.170985 | orchestrator | Saturday 27 September 2025 21:51:31 +0000 (0:00:03.487) 0:00:15.838 **** 2025-09-27 21:52:15.170989 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:52:15.170993 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-27 21:52:15.170997 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-27 21:52:15.171001 | orchestrator | 2025-09-27 21:52:15.171004 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-27 21:52:15.171008 | orchestrator | Saturday 27 September 2025 21:51:40 +0000 (0:00:08.980) 0:00:24.818 **** 2025-09-27 21:52:15.171012 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:52:15.171016 | orchestrator | 2025-09-27 21:52:15.171020 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-27 21:52:15.171023 | orchestrator | Saturday 27 September 2025 21:51:44 +0000 (0:00:03.757) 0:00:28.575 **** 2025-09-27 21:52:15.171027 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-27 21:52:15.171031 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-27 21:52:15.171035 | orchestrator | 2025-09-27 21:52:15.171038 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-27 21:52:15.171042 | orchestrator | Saturday 27 September 2025 21:51:52 +0000 (0:00:07.834) 0:00:36.410 **** 2025-09-27 21:52:15.171046 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-27 21:52:15.171050 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-27 21:52:15.171054 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-27 21:52:15.171057 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-27 21:52:15.171080 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-27 21:52:15.171085 | orchestrator | 2025-09-27 21:52:15.171089 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-27 21:52:15.171093 | orchestrator | Saturday 27 September 2025 21:52:08 +0000 (0:00:16.559) 0:00:52.969 **** 2025-09-27 21:52:15.171098 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:52:15.171102 | orchestrator | 2025-09-27 21:52:15.171109 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-27 21:52:15.171114 | orchestrator | Saturday 27 September 2025 21:52:09 +0000 (0:00:00.553) 
0:00:53.523 **** 2025-09-27 21:52:15.171119 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-09-27 21:52:15.171135 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1759009930.6149757-6095-110513539512934/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1759009930.6149757-6095-110513539512934/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1759009930.6149757-6095-110513539512934/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_m_9oqs47/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_m_9oqs47/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_m_9oqs47/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_m_9oqs47/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.12/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", 
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-09-27 21:52:15.171148 | orchestrator | 2025-09-27 21:52:15.171152 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:52:15.171156 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.171164 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.171168 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:52:15.171172 | orchestrator | 2025-09-27 21:52:15.171176 | orchestrator | 2025-09-27 21:52:15.171181 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:52:15.171185 | orchestrator | Saturday 27 September 2025 21:52:12 +0000 (0:00:03.349) 0:00:56.872 **** 2025-09-27 21:52:15.171189 | orchestrator | =============================================================================== 2025-09-27 21:52:15.171197 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.56s 2025-09-27 21:52:15.171201 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.98s 2025-09-27 21:52:15.171206 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.83s 2025-09-27 21:52:15.171210 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.25s 2025-09-27 21:52:15.171214 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.76s 2025-09-27 21:52:15.171218 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.64s 2025-09-27 21:52:15.171238 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.49s 2025-09-27 21:52:15.171242 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.35s 2025-09-27 21:52:15.171247 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.55s 2025-09-27 21:52:15.171251 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.52s 2025-09-27 21:52:15.171255 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2025-09-27 21:52:15.171259 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-27 21:52:15.171305 | orchestrator | 2025-09-27 21:52:15 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:15.171921 | orchestrator | 2025-09-27 21:52:15 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:15.172970 | orchestrator | 2025-09-27 21:52:15 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:15.173951 | orchestrator | 2025-09-27 21:52:15 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:15.173967 | orchestrator | 2025-09-27 21:52:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:18.212025 | orchestrator | 2025-09-27 21:52:18 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:18.212134 | orchestrator | 2025-09-27 21:52:18 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:18.214439 | orchestrator | 
2025-09-27 21:52:18 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:18.214727 | orchestrator | 2025-09-27 21:52:18 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:18.214972 | orchestrator | 2025-09-27 21:52:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:21.254937 | orchestrator | 2025-09-27 21:52:21 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:21.255180 | orchestrator | 2025-09-27 21:52:21 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:21.257130 | orchestrator | 2025-09-27 21:52:21 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:21.257641 | orchestrator | 2025-09-27 21:52:21 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:21.257875 | orchestrator | 2025-09-27 21:52:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:24.311265 | orchestrator | 2025-09-27 21:52:24 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:24.313649 | orchestrator | 2025-09-27 21:52:24 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:24.316140 | orchestrator | 2025-09-27 21:52:24 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:24.317802 | orchestrator | 2025-09-27 21:52:24 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:24.317862 | orchestrator | 2025-09-27 21:52:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:27.368001 | orchestrator | 2025-09-27 21:52:27 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:27.370840 | orchestrator | 2025-09-27 21:52:27 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:27.373724 | orchestrator | 2025-09-27 21:52:27 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:27.376706 | orchestrator | 2025-09-27 21:52:27 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:27.376787 | orchestrator | 2025-09-27 21:52:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:30.417425 | orchestrator | 2025-09-27 21:52:30 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:30.417689 | orchestrator | 2025-09-27 21:52:30 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:30.420177 | orchestrator | 2025-09-27 21:52:30 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:30.421652 | orchestrator | 2025-09-27 21:52:30 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:30.421675 | orchestrator | 2025-09-27 21:52:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:33.461975 | orchestrator | 2025-09-27 21:52:33 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:33.463457 | orchestrator | 2025-09-27 21:52:33 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:33.465113 | orchestrator | 2025-09-27 21:52:33 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:33.466369 | orchestrator | 2025-09-27 21:52:33 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:33.466599 | orchestrator | 
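The repeating "is in state STARTED … Wait 1 second(s) until the next check" lines are the deployment driver polling its background tasks once per cycle until they report a final state. Expressed as an Ansible-style retry loop purely for illustration (show-task-state below is a hypothetical placeholder, not the real CLI):

- name: Wait for a background task to leave the STARTED state
  ansible.builtin.command: show-task-state ead3d2a1-261b-4b9a-8ff0-8409b977dbd6   # hypothetical helper
  register: task_state
  changed_when: false
  retries: 600      # keep checking for up to ~10 minutes
  delay: 1          # matches the "Wait 1 second(s)" cadence in the log
  until: task_state.stdout != 'STARTED'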
2025-09-27 21:52:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:36.512449 | orchestrator | 2025-09-27 21:52:36 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:36.513775 | orchestrator | 2025-09-27 21:52:36 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:36.515711 | orchestrator | 2025-09-27 21:52:36 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:36.516979 | orchestrator | 2025-09-27 21:52:36 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:36.517115 | orchestrator | 2025-09-27 21:52:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:39.557554 | orchestrator | 2025-09-27 21:52:39 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:39.558671 | orchestrator | 2025-09-27 21:52:39 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:39.560936 | orchestrator | 2025-09-27 21:52:39 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:39.562209 | orchestrator | 2025-09-27 21:52:39 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:39.562249 | orchestrator | 2025-09-27 21:52:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:42.607547 | orchestrator | 2025-09-27 21:52:42 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:42.608722 | orchestrator | 2025-09-27 21:52:42 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:42.611474 | orchestrator | 2025-09-27 21:52:42 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:42.613705 | orchestrator | 2025-09-27 21:52:42 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:42.613731 | orchestrator | 2025-09-27 21:52:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:45.654509 | orchestrator | 2025-09-27 21:52:45 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:45.655776 | orchestrator | 2025-09-27 21:52:45 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:45.656648 | orchestrator | 2025-09-27 21:52:45 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:45.658808 | orchestrator | 2025-09-27 21:52:45 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:45.658838 | orchestrator | 2025-09-27 21:52:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:48.705600 | orchestrator | 2025-09-27 21:52:48 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:48.708259 | orchestrator | 2025-09-27 21:52:48 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:48.710869 | orchestrator | 2025-09-27 21:52:48 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:48.712877 | orchestrator | 2025-09-27 21:52:48 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:48.712931 | orchestrator | 2025-09-27 21:52:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:51.753365 | orchestrator | 2025-09-27 21:52:51 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:51.754986 | orchestrator | 2025-09-27 21:52:51 | INFO 
 | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:51.755990 | orchestrator | 2025-09-27 21:52:51 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:51.757207 | orchestrator | 2025-09-27 21:52:51 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:51.757992 | orchestrator | 2025-09-27 21:52:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:54.800684 | orchestrator | 2025-09-27 21:52:54 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:54.802593 | orchestrator | 2025-09-27 21:52:54 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:54.804610 | orchestrator | 2025-09-27 21:52:54 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state STARTED 2025-09-27 21:52:54.806698 | orchestrator | 2025-09-27 21:52:54 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:54.806761 | orchestrator | 2025-09-27 21:52:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:52:57.847544 | orchestrator | 2025-09-27 21:52:57 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:52:57.848431 | orchestrator | 2025-09-27 21:52:57 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:52:57.852943 | orchestrator | 2025-09-27 21:52:57 | INFO  | Task de1719c3-149e-448a-bed6-2ea52996e6f3 is in state SUCCESS 2025-09-27 21:52:57.853227 | orchestrator | 2025-09-27 21:52:57.855615 | orchestrator | 2025-09-27 21:52:57.855656 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:52:57.855666 | orchestrator | 2025-09-27 21:52:57.855676 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:52:57.855685 | orchestrator | Saturday 27 September 2025 21:49:57 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-27 21:52:57.855718 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:52:57.855729 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:52:57.855738 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:52:57.855746 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:52:57.855755 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:52:57.855763 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:52:57.855772 | orchestrator | 2025-09-27 21:52:57.855802 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:52:57.855812 | orchestrator | Saturday 27 September 2025 21:49:57 +0000 (0:00:00.805) 0:00:01.066 **** 2025-09-27 21:52:57.855820 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-27 21:52:57.855830 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-27 21:52:57.855839 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-27 21:52:57.855847 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-27 21:52:57.855907 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-27 21:52:57.855916 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-27 21:52:57.855924 | orchestrator | 2025-09-27 21:52:57.855933 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-27 21:52:57.855941 | orchestrator | 2025-09-27 21:52:57.855950 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-09-27 21:52:57.855958 | orchestrator | Saturday 27 September 2025 21:49:59 +0000 (0:00:01.140) 0:00:02.207 **** 2025-09-27 21:52:57.855967 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:52:57.855978 | orchestrator | 2025-09-27 21:52:57.855987 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-27 21:52:57.855995 | orchestrator | Saturday 27 September 2025 21:50:00 +0000 (0:00:01.404) 0:00:03.611 **** 2025-09-27 21:52:57.856005 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-27 21:52:57.856013 | orchestrator | 2025-09-27 21:52:57.856022 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-27 21:52:57.856031 | orchestrator | Saturday 27 September 2025 21:50:04 +0000 (0:00:03.663) 0:00:07.275 **** 2025-09-27 21:52:57.856088 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-27 21:52:57.856137 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-27 21:52:57.856149 | orchestrator | 2025-09-27 21:52:57.856159 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-27 21:52:57.856168 | orchestrator | Saturday 27 September 2025 21:50:11 +0000 (0:00:07.139) 0:00:14.415 **** 2025-09-27 21:52:57.856178 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:52:57.856188 | orchestrator | 2025-09-27 21:52:57.856253 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-27 21:52:57.856271 | orchestrator | Saturday 27 September 2025 21:50:14 +0000 (0:00:03.435) 0:00:17.851 **** 2025-09-27 21:52:57.856304 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:52:57.856351 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-27 21:52:57.856364 | orchestrator | 2025-09-27 21:52:57.856374 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-27 21:52:57.856384 | orchestrator | Saturday 27 September 2025 21:50:18 +0000 (0:00:03.972) 0:00:21.824 **** 2025-09-27 21:52:57.856394 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:52:57.856403 | orchestrator | 2025-09-27 21:52:57.856413 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-27 21:52:57.856423 | orchestrator | Saturday 27 September 2025 21:50:22 +0000 (0:00:03.944) 0:00:25.768 **** 2025-09-27 21:52:57.856433 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-27 21:52:57.856453 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-27 21:52:57.856462 | orchestrator | 2025-09-27 21:52:57.856471 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-27 21:52:57.856520 | orchestrator | Saturday 27 September 2025 21:50:30 +0000 (0:00:08.252) 0:00:34.020 **** 2025-09-27 21:52:57.856535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.856565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.856581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.856620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856705 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856727 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.856777 | orchestrator | 2025-09-27 21:52:57.856796 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 21:52:57.856810 | orchestrator | Saturday 27 September 2025 21:50:33 +0000 (0:00:02.336) 0:00:36.357 **** 2025-09-27 21:52:57.856824 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.856838 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.856851 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.856864 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.856876 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.856889 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.856901 | orchestrator | 2025-09-27 21:52:57.856914 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 21:52:57.856927 | orchestrator | Saturday 27 September 2025 21:50:34 +0000 (0:00:00.765) 0:00:37.122 **** 2025-09-27 21:52:57.856940 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.856953 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.856965 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.856979 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 
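The long "Ensuring config directories exist" items above come from iterating over the role's service map and creating one /etc/kolla/<service> directory per service that is enabled and mapped to the current host's group. A rough sketch of that loop, assuming a cinder_services dictionary shaped like the items printed in the log (ownership and mode are assumptions):

- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  become: true
  # Only services that are enabled and whose group contains this host produce
  # a "changed: (item={'key': 'cinder-api', ...})" line; the rest are skipped.
  when:
    - item.value.enabled | bool
    - inventory_hostname in groups[item.value.group]
  with_dict: "{{ cinder_services }}"

The include_tasks lines that follow apply the same filter: external_ceph.yml is only pulled in on the volume/backup hosts (testbed-node-3 through testbed-node-5), which is why testbed-node-0 through testbed-node-2 show "skipping".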
21:52:57.856991 | orchestrator | 2025-09-27 21:52:57.857003 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-27 21:52:57.857016 | orchestrator | Saturday 27 September 2025 21:50:35 +0000 (0:00:00.970) 0:00:38.093 **** 2025-09-27 21:52:57.857029 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-27 21:52:57.857041 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-27 21:52:57.857054 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-27 21:52:57.857067 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-27 21:52:57.857078 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-27 21:52:57.857091 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-27 21:52:57.857103 | orchestrator | 2025-09-27 21:52:57.857116 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-27 21:52:57.857128 | orchestrator | Saturday 27 September 2025 21:50:36 +0000 (0:00:01.595) 0:00:39.689 **** 2025-09-27 21:52:57.857143 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 21:52:57.857169 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 21:52:57.857179 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 21:52:57.857195 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 21:52:57.857208 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 21:52:57.857224 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-27 21:52:57.857251 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 21:52:57.857268 | orchestrator | changed: [testbed-node-5] => 
(item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 21:52:57.857292 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 21:52:57.857309 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 21:52:57.857335 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 21:52:57.857351 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-27 21:52:57.857360 | orchestrator | 2025-09-27 21:52:57.857369 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-27 21:52:57.857378 | orchestrator | Saturday 27 September 2025 21:50:40 +0000 (0:00:03.818) 0:00:43.507 **** 2025-09-27 21:52:57.857386 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:52:57.857396 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:52:57.857405 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-27 21:52:57.857414 | orchestrator | 2025-09-27 21:52:57.857422 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-27 21:52:57.857431 | orchestrator | Saturday 27 September 2025 21:50:42 +0000 (0:00:02.026) 0:00:45.534 **** 2025-09-27 21:52:57.857439 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-27 21:52:57.857448 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-27 21:52:57.857456 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-27 21:52:57.857465 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:52:57.857499 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:52:57.857515 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-27 21:52:57.857524 | orchestrator | 2025-09-27 21:52:57.857533 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-27 21:52:57.857544 | orchestrator | Saturday 27 September 2025 21:50:45 +0000 (0:00:03.254) 0:00:48.788 **** 2025-09-27 21:52:57.857559 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-27 21:52:57.857574 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-27 21:52:57.857589 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-27 21:52:57.857603 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-27 21:52:57.857640 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-27 21:52:57.857655 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-27 21:52:57.857674 | orchestrator | 2025-09-27 21:52:57.857693 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-27 21:52:57.857720 | orchestrator | Saturday 27 September 2025 21:50:46 +0000 (0:00:01.055) 0:00:49.843 **** 2025-09-27 21:52:57.857733 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.857747 | orchestrator | 2025-09-27 21:52:57.857760 | orchestrator | TASK [cinder : 
Set cinder policy file] ***************************************** 2025-09-27 21:52:57.857774 | orchestrator | Saturday 27 September 2025 21:50:46 +0000 (0:00:00.184) 0:00:50.028 **** 2025-09-27 21:52:57.857788 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.857801 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.857839 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.857857 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.857872 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.857886 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.857896 | orchestrator | 2025-09-27 21:52:57.857905 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 21:52:57.857913 | orchestrator | Saturday 27 September 2025 21:50:47 +0000 (0:00:00.580) 0:00:50.608 **** 2025-09-27 21:52:57.857924 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:52:57.857934 | orchestrator | 2025-09-27 21:52:57.857942 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-27 21:52:57.857951 | orchestrator | Saturday 27 September 2025 21:50:48 +0000 (0:00:00.872) 0:00:51.481 **** 2025-09-27 21:52:57.857971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.857982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.858000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.858064 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.858989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859047 | orchestrator | 2025-09-27 21:52:57.859061 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-27 21:52:57.859073 | orchestrator | Saturday 27 September 2025 21:50:51 +0000 (0:00:02.916) 0:00:54.397 **** 2025-09-27 21:52:57.859087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.859116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.859150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859162 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.859174 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.859191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859214 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.859226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.859252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859263 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
21:52:57.859275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859297 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.859314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859344 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.859355 | orchestrator | 2025-09-27 21:52:57.859366 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-27 
21:52:57.859377 | orchestrator | Saturday 27 September 2025 21:50:52 +0000 (0:00:01.162) 0:00:55.559 **** 2025-09-27 21:52:57.859395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.859407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859419 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.859431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.859449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859462 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.859509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.859552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859566 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.859578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859601 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.859618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859649 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.859667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.859690 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.859701 | orchestrator | 2025-09-27 21:52:57.859712 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-27 21:52:57.859722 | orchestrator | Saturday 27 September 2025 21:50:53 +0000 (0:00:01.402) 0:00:56.961 **** 2025-09-27 21:52:57.859734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.859750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.859769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.859787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859810 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.859912 | orchestrator | 2025-09-27 21:52:57.859923 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-27 21:52:57.859941 | orchestrator | Saturday 27 September 2025 21:50:56 +0000 (0:00:02.954) 0:00:59.916 **** 2025-09-27 21:52:57.859952 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-27 21:52:57.859963 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.859974 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-27 21:52:57.859990 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.860001 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-27 21:52:57.860012 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.860022 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-27 21:52:57.860033 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-27 21:52:57.860044 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-27 21:52:57.860055 | orchestrator | 2025-09-27 21:52:57.860065 | orchestrator | TASK [cinder : Copying over cinder.conf] 
*************************************** 2025-09-27 21:52:57.860076 | orchestrator | Saturday 27 September 2025 21:50:58 +0000 (0:00:01.734) 0:01:01.650 **** 2025-09-27 21:52:57.860087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.860106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.860133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.860152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860261 | orchestrator | 2025-09-27 21:52:57.860271 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-27 21:52:57.860282 | orchestrator | Saturday 27 September 2025 21:51:05 +0000 (0:00:07.200) 0:01:08.851 **** 2025-09-27 21:52:57.860299 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.860310 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.860321 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.860332 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:52:57.860342 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:52:57.860353 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:52:57.860364 | orchestrator | 2025-09-27 21:52:57.860374 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-27 21:52:57.860385 | orchestrator | Saturday 27 September 2025 21:51:07 +0000 (0:00:01.737) 0:01:10.588 **** 2025-09-27 21:52:57.860396 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.860416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860427 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.860443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.860455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860466 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.860508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-27 21:52:57.860521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860539 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.860551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860562 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860578 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.860590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860612 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.860629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-27 21:52:57.860659 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.860670 | orchestrator | 2025-09-27 21:52:57.860681 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-27 21:52:57.860691 | orchestrator | Saturday 27 September 2025 21:51:08 +0000 (0:00:01.277) 0:01:11.866 **** 2025-09-27 21:52:57.860702 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.860713 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.860724 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.860734 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.860745 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.860756 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.860766 | 
orchestrator | 2025-09-27 21:52:57.860777 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-27 21:52:57.860788 | orchestrator | Saturday 27 September 2025 21:51:09 +0000 (0:00:00.581) 0:01:12.447 **** 2025-09-27 21:52:57.860804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.860816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.860834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-27 21:52:57.860852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860864 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860965 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-27 21:52:57.860976 | orchestrator | 2025-09-27 21:52:57.860987 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-27 21:52:57.860998 | orchestrator | Saturday 27 September 2025 21:51:11 +0000 (0:00:02.597) 0:01:15.045 **** 2025-09-27 21:52:57.861009 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.861020 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:52:57.861030 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:52:57.861041 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:52:57.861051 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:52:57.861062 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:52:57.861072 | orchestrator | 2025-09-27 21:52:57.861083 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-27 21:52:57.861094 | 
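The cinder container definitions checked above attach two kinds of healthchecks: `healthcheck_curl http://192.168.16.1x:8776` for cinder-api and `healthcheck_port <service> 5672` for the volume/scheduler/backup services. A minimal Python sketch of comparable probes follows; the helper names are hypothetical and this is not the actual kolla healthcheck script (in particular, the real healthcheck_port inspects the named process for an established connection on the port rather than dialing it).

```python
import socket
import urllib.error
import urllib.request


def http_probe(url: str, timeout: float = 30.0) -> bool:
    """Rough equivalent of healthcheck_curl: any HTTP answer counts as healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the API responded, even if with an error status
    except OSError:
        return False  # connection refused, timeout, name resolution failure, ...


def tcp_probe(host: str, port: int, timeout: float = 30.0) -> bool:
    """Loose stand-in for healthcheck_port: simple TCP reachability check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    print(http_probe("http://192.168.16.10:8776"))  # cinder-api target from the node-0 healthcheck above
    print(tcp_probe("192.168.16.10", 5672))         # RabbitMQ port referenced by the cinder-volume check
```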
orchestrator | Saturday 27 September 2025 21:51:12 +0000 (0:00:00.532) 0:01:15.578 **** 2025-09-27 21:52:57.861104 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:52:57.861121 | orchestrator | 2025-09-27 21:52:57.861132 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-27 21:52:57.861143 | orchestrator | Saturday 27 September 2025 21:51:15 +0000 (0:00:02.706) 0:01:18.285 **** 2025-09-27 21:52:57.861153 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:52:57.861164 | orchestrator | 2025-09-27 21:52:57.861174 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-27 21:52:57.861185 | orchestrator | Saturday 27 September 2025 21:51:17 +0000 (0:00:02.507) 0:01:20.792 **** 2025-09-27 21:52:57.861196 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:52:57.861207 | orchestrator | 2025-09-27 21:52:57.861217 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 21:52:57.861228 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:20.848) 0:01:41.641 **** 2025-09-27 21:52:57.861238 | orchestrator | 2025-09-27 21:52:57.861254 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 21:52:57.861266 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:00.065) 0:01:41.706 **** 2025-09-27 21:52:57.861276 | orchestrator | 2025-09-27 21:52:57.861287 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 21:52:57.861298 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:00.060) 0:01:41.767 **** 2025-09-27 21:52:57.861316 | orchestrator | 2025-09-27 21:52:57.861333 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 21:52:57.861351 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:00.065) 0:01:41.832 **** 2025-09-27 21:52:57.861369 | orchestrator | 2025-09-27 21:52:57.861380 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 21:52:57.861391 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:00.063) 0:01:41.895 **** 2025-09-27 21:52:57.861402 | orchestrator | 2025-09-27 21:52:57.861412 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-27 21:52:57.861423 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:00.063) 0:01:41.959 **** 2025-09-27 21:52:57.861433 | orchestrator | 2025-09-27 21:52:57.861444 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-27 21:52:57.861455 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:00.064) 0:01:42.023 **** 2025-09-27 21:52:57.861465 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:52:57.861500 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:52:57.861512 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:52:57.861522 | orchestrator | 2025-09-27 21:52:57.861533 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-27 21:52:57.861587 | orchestrator | Saturday 27 September 2025 21:52:01 +0000 (0:00:22.422) 0:02:04.446 **** 2025-09-27 21:52:57.861599 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:52:57.861610 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:52:57.861620 | orchestrator | 
changed: [testbed-node-0] 2025-09-27 21:52:57.861631 | orchestrator | 2025-09-27 21:52:57.861642 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-27 21:52:57.861653 | orchestrator | Saturday 27 September 2025 21:52:09 +0000 (0:00:08.333) 0:02:12.779 **** 2025-09-27 21:52:57.861663 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:52:57.861674 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:52:57.861685 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:52:57.861695 | orchestrator | 2025-09-27 21:52:57.861706 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-27 21:52:57.861717 | orchestrator | Saturday 27 September 2025 21:52:50 +0000 (0:00:40.830) 0:02:53.610 **** 2025-09-27 21:52:57.861728 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:52:57.861738 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:52:57.861749 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:52:57.861759 | orchestrator | 2025-09-27 21:52:57.861770 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-27 21:52:57.861781 | orchestrator | Saturday 27 September 2025 21:52:56 +0000 (0:00:05.760) 0:02:59.370 **** 2025-09-27 21:52:57.861800 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:52:57.861811 | orchestrator | 2025-09-27 21:52:57.861822 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:52:57.861833 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-27 21:52:57.861849 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-27 21:52:57.861860 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-27 21:52:57.861871 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 21:52:57.861882 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 21:52:57.861893 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-27 21:52:57.861903 | orchestrator | 2025-09-27 21:52:57.861914 | orchestrator | 2025-09-27 21:52:57.861925 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:52:57.861935 | orchestrator | Saturday 27 September 2025 21:52:57 +0000 (0:00:00.781) 0:03:00.151 **** 2025-09-27 21:52:57.861946 | orchestrator | =============================================================================== 2025-09-27 21:52:57.861957 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 40.83s 2025-09-27 21:52:57.861968 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.42s 2025-09-27 21:52:57.861978 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.85s 2025-09-27 21:52:57.861989 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.33s 2025-09-27 21:52:57.861999 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.25s 2025-09-27 21:52:57.862010 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 7.20s 
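The PLAY RECAP above reports per-host counters (ok/changed/unreachable/failed/skipped/rescued/ignored), and the TASKS RECAP that follows lists the slowest tasks with their durations. A small, hypothetical helper for turning such recap lines into dictionaries when post-processing logs like this one; it is a sketch, not part of the deployment tooling.

```python
import re

# Matches lines like:
#   testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
RECAP_RE = re.compile(r"^\s*(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")


def parse_recap_line(line: str):
    """Return (host, {counter: value}) for a PLAY RECAP line, or None if it does not match."""
    m = RECAP_RE.match(line)
    if not m:
        return None
    counters = {k: int(v) for k, v in (pair.split("=") for pair in m.group("counters").split())}
    return m.group("host"), counters


host, counters = parse_recap_line(
    "testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0"
)
assert host == "testbed-node-0" and counters["failed"] == 0
```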
2025-09-27 21:52:57.862078 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.14s 2025-09-27 21:52:57.862090 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.76s 2025-09-27 21:52:57.862166 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.97s 2025-09-27 21:52:57.862179 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.94s 2025-09-27 21:52:57.862190 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.82s 2025-09-27 21:52:57.862201 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.66s 2025-09-27 21:52:57.862212 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.44s 2025-09-27 21:52:57.862222 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.25s 2025-09-27 21:52:57.862233 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.95s 2025-09-27 21:52:57.862243 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.92s 2025-09-27 21:52:57.862254 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.71s 2025-09-27 21:52:57.862265 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.60s 2025-09-27 21:52:57.862276 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.51s 2025-09-27 21:52:57.862286 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.34s 2025-09-27 21:52:57.862297 | orchestrator | 2025-09-27 21:52:57 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:52:57.862315 | orchestrator | 2025-09-27 21:52:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:00.903049 | orchestrator | 2025-09-27 21:53:00 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:00.904309 | orchestrator | 2025-09-27 21:53:00 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:00.906561 | orchestrator | 2025-09-27 21:53:00 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:00.906601 | orchestrator | 2025-09-27 21:53:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:03.947919 | orchestrator | 2025-09-27 21:53:03 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:03.948814 | orchestrator | 2025-09-27 21:53:03 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:03.949050 | orchestrator | 2025-09-27 21:53:03 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:03.949079 | orchestrator | 2025-09-27 21:53:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:06.996111 | orchestrator | 2025-09-27 21:53:06 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:06.997668 | orchestrator | 2025-09-27 21:53:06 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:06.999902 | orchestrator | 2025-09-27 21:53:07 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:06.999962 | orchestrator | 2025-09-27 21:53:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 
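The INFO lines around this point come from the deployment wrapper waiting on background tasks (identified by UUID) that stay in state STARTED; it re-checks every few seconds until a task reaches a terminal state such as SUCCESS. A minimal sketch of that kind of wait loop, with a hypothetical get_task_state() standing in for whatever the tool actually queries:

```python
import time


def wait_for_task(task_id: str, get_task_state, interval: float = 1.0, timeout: float = 3600.0) -> str:
    """Poll get_task_state(task_id) until it leaves PENDING/STARTED, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_task_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state not in ("PENDING", "STARTED"):
            return state  # e.g. SUCCESS or FAILURE
        print(f"Wait {interval:.0f} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")
```

Called as wait_for_task("ead3d2a1-261b-4b9a-8ff0-8409b977dbd6", get_task_state), such a loop would print lines very similar to the ones in this log.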
21:53:10.041209 | orchestrator | 2025-09-27 21:53:10 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:10.042229 | orchestrator | 2025-09-27 21:53:10 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:10.043729 | orchestrator | 2025-09-27 21:53:10 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:10.043821 | orchestrator | 2025-09-27 21:53:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:13.096658 | orchestrator | 2025-09-27 21:53:13 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:13.098544 | orchestrator | 2025-09-27 21:53:13 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:13.101592 | orchestrator | 2025-09-27 21:53:13 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:13.101610 | orchestrator | 2025-09-27 21:53:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:16.157085 | orchestrator | 2025-09-27 21:53:16 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:16.160267 | orchestrator | 2025-09-27 21:53:16 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:16.161712 | orchestrator | 2025-09-27 21:53:16 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:16.162008 | orchestrator | 2025-09-27 21:53:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:19.207897 | orchestrator | 2025-09-27 21:53:19 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:19.208689 | orchestrator | 2025-09-27 21:53:19 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:19.210259 | orchestrator | 2025-09-27 21:53:19 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:19.210275 | orchestrator | 2025-09-27 21:53:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:22.258487 | orchestrator | 2025-09-27 21:53:22 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:22.259636 | orchestrator | 2025-09-27 21:53:22 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:22.261065 | orchestrator | 2025-09-27 21:53:22 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:22.261289 | orchestrator | 2025-09-27 21:53:22 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:25.304056 | orchestrator | 2025-09-27 21:53:25 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:25.305513 | orchestrator | 2025-09-27 21:53:25 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:25.307562 | orchestrator | 2025-09-27 21:53:25 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:25.307605 | orchestrator | 2025-09-27 21:53:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:28.352764 | orchestrator | 2025-09-27 21:53:28 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:28.354338 | orchestrator | 2025-09-27 21:53:28 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:28.356296 | orchestrator | 2025-09-27 21:53:28 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:28.356319 | 
orchestrator | 2025-09-27 21:53:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:31.396199 | orchestrator | 2025-09-27 21:53:31 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:31.398169 | orchestrator | 2025-09-27 21:53:31 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:31.400299 | orchestrator | 2025-09-27 21:53:31 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:31.400382 | orchestrator | 2025-09-27 21:53:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:34.442739 | orchestrator | 2025-09-27 21:53:34 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:34.444706 | orchestrator | 2025-09-27 21:53:34 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:34.447824 | orchestrator | 2025-09-27 21:53:34 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:34.447929 | orchestrator | 2025-09-27 21:53:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:37.491180 | orchestrator | 2025-09-27 21:53:37 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:37.491931 | orchestrator | 2025-09-27 21:53:37 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:37.493117 | orchestrator | 2025-09-27 21:53:37 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:37.493150 | orchestrator | 2025-09-27 21:53:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:40.534486 | orchestrator | 2025-09-27 21:53:40 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:40.535681 | orchestrator | 2025-09-27 21:53:40 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:40.537524 | orchestrator | 2025-09-27 21:53:40 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:40.537740 | orchestrator | 2025-09-27 21:53:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:43.582280 | orchestrator | 2025-09-27 21:53:43 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:43.584295 | orchestrator | 2025-09-27 21:53:43 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:43.585991 | orchestrator | 2025-09-27 21:53:43 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:43.586180 | orchestrator | 2025-09-27 21:53:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:46.627238 | orchestrator | 2025-09-27 21:53:46 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:46.629172 | orchestrator | 2025-09-27 21:53:46 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:46.630610 | orchestrator | 2025-09-27 21:53:46 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:46.630628 | orchestrator | 2025-09-27 21:53:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:49.673842 | orchestrator | 2025-09-27 21:53:49 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:49.674359 | orchestrator | 2025-09-27 21:53:49 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:49.675586 | orchestrator | 2025-09-27 21:53:49 | INFO  | Task 
5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:49.675907 | orchestrator | 2025-09-27 21:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:52.720793 | orchestrator | 2025-09-27 21:53:52 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:52.722714 | orchestrator | 2025-09-27 21:53:52 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:52.723993 | orchestrator | 2025-09-27 21:53:52 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:52.724016 | orchestrator | 2025-09-27 21:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:55.765869 | orchestrator | 2025-09-27 21:53:55 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:55.767922 | orchestrator | 2025-09-27 21:53:55 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:55.769210 | orchestrator | 2025-09-27 21:53:55 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:55.769253 | orchestrator | 2025-09-27 21:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:53:58.802307 | orchestrator | 2025-09-27 21:53:58 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:53:58.804401 | orchestrator | 2025-09-27 21:53:58 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:53:58.806150 | orchestrator | 2025-09-27 21:53:58 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:53:58.806186 | orchestrator | 2025-09-27 21:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:01.852839 | orchestrator | 2025-09-27 21:54:01 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:01.855510 | orchestrator | 2025-09-27 21:54:01 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:01.858526 | orchestrator | 2025-09-27 21:54:01 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:01.858671 | orchestrator | 2025-09-27 21:54:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:04.911466 | orchestrator | 2025-09-27 21:54:04 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:04.913682 | orchestrator | 2025-09-27 21:54:04 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:04.915750 | orchestrator | 2025-09-27 21:54:04 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:04.916042 | orchestrator | 2025-09-27 21:54:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:07.954694 | orchestrator | 2025-09-27 21:54:07 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:07.954769 | orchestrator | 2025-09-27 21:54:07 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:07.955271 | orchestrator | 2025-09-27 21:54:07 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:07.955489 | orchestrator | 2025-09-27 21:54:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:10.996713 | orchestrator | 2025-09-27 21:54:10 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:10.999198 | orchestrator | 2025-09-27 21:54:10 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state 
STARTED 2025-09-27 21:54:11.001685 | orchestrator | 2025-09-27 21:54:11 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:11.002298 | orchestrator | 2025-09-27 21:54:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:14.042338 | orchestrator | 2025-09-27 21:54:14 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:14.043840 | orchestrator | 2025-09-27 21:54:14 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:14.046442 | orchestrator | 2025-09-27 21:54:14 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:14.046909 | orchestrator | 2025-09-27 21:54:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:17.089452 | orchestrator | 2025-09-27 21:54:17 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:17.092705 | orchestrator | 2025-09-27 21:54:17 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:17.096766 | orchestrator | 2025-09-27 21:54:17 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:17.097320 | orchestrator | 2025-09-27 21:54:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:20.140852 | orchestrator | 2025-09-27 21:54:20 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:20.142759 | orchestrator | 2025-09-27 21:54:20 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:20.143760 | orchestrator | 2025-09-27 21:54:20 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:20.143820 | orchestrator | 2025-09-27 21:54:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:23.185075 | orchestrator | 2025-09-27 21:54:23 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:23.186875 | orchestrator | 2025-09-27 21:54:23 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:23.188409 | orchestrator | 2025-09-27 21:54:23 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:23.188435 | orchestrator | 2025-09-27 21:54:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:26.230953 | orchestrator | 2025-09-27 21:54:26 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:26.232096 | orchestrator | 2025-09-27 21:54:26 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:26.233704 | orchestrator | 2025-09-27 21:54:26 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:26.233747 | orchestrator | 2025-09-27 21:54:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:29.273398 | orchestrator | 2025-09-27 21:54:29 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state STARTED 2025-09-27 21:54:29.277066 | orchestrator | 2025-09-27 21:54:29 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:29.278420 | orchestrator | 2025-09-27 21:54:29 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:29.278743 | orchestrator | 2025-09-27 21:54:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:32.325280 | orchestrator | 2025-09-27 21:54:32.325406 | orchestrator | 2025-09-27 21:54:32 | INFO  | Task ead3d2a1-261b-4b9a-8ff0-8409b977dbd6 is in state 
SUCCESS 2025-09-27 21:54:32.327918 | orchestrator | 2025-09-27 21:54:32.327958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:54:32.327971 | orchestrator | 2025-09-27 21:54:32.327983 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:54:32.327995 | orchestrator | Saturday 27 September 2025 21:52:17 +0000 (0:00:00.260) 0:00:00.260 **** 2025-09-27 21:54:32.328007 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:54:32.328028 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:54:32.328045 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:54:32.328156 | orchestrator | 2025-09-27 21:54:32.328168 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:54:32.328179 | orchestrator | Saturday 27 September 2025 21:52:17 +0000 (0:00:00.282) 0:00:00.542 **** 2025-09-27 21:54:32.328190 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-27 21:54:32.328202 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-27 21:54:32.328213 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-27 21:54:32.328224 | orchestrator | 2025-09-27 21:54:32.328235 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-27 21:54:32.328245 | orchestrator | 2025-09-27 21:54:32.328256 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-27 21:54:32.328267 | orchestrator | Saturday 27 September 2025 21:52:18 +0000 (0:00:00.398) 0:00:00.941 **** 2025-09-27 21:54:32.328278 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:54:32.328291 | orchestrator | 2025-09-27 21:54:32.328301 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-27 21:54:32.328312 | orchestrator | Saturday 27 September 2025 21:52:18 +0000 (0:00:00.510) 0:00:01.452 **** 2025-09-27 21:54:32.328327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.328345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.328447 | 
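According to the grafana container definition above, HAProxy publishes Grafana internally on listen_port 3000 and externally behind api.testbed.osism.xyz. A hedged post-deploy reachability sketch is shown below; /api/health is Grafana's standard health endpoint, the hostname and port are taken from this log, and the https scheme (plus certificate handling) is an assumption about the external frontend.

```python
import urllib.error
import urllib.request


def grafana_healthy(base_url: str, timeout: float = 10.0) -> bool:
    """Return True if Grafana answers 200 on its /api/health endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


# External FQDN and port from the grafana_server_external haproxy entry above;
# a self-signed or internal CA certificate would make this return False unless trusted.
print(grafana_healthy("https://api.testbed.osism.xyz:3000"))
```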
orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.328462 | orchestrator | 2025-09-27 21:54:32.328473 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-27 21:54:32.328484 | orchestrator | Saturday 27 September 2025 21:52:19 +0000 (0:00:00.726) 0:00:02.178 **** 2025-09-27 21:54:32.328573 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-27 21:54:32.328589 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-27 21:54:32.328601 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:54:32.328614 | orchestrator | 2025-09-27 21:54:32.328625 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-27 21:54:32.328753 | orchestrator | Saturday 27 September 2025 21:52:20 +0000 (0:00:00.791) 0:00:02.969 **** 2025-09-27 21:54:32.328766 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:54:32.328778 | orchestrator | 2025-09-27 21:54:32.328803 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-27 21:54:32.328816 | orchestrator | Saturday 27 September 2025 21:52:20 +0000 (0:00:00.704) 0:00:03.673 **** 2025-09-27 21:54:32.328844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.328858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.328870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.328892 | orchestrator | 2025-09-27 21:54:32.328903 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-27 21:54:32.328917 | orchestrator | Saturday 27 September 2025 21:52:22 +0000 (0:00:01.377) 0:00:05.051 **** 2025-09-27 21:54:32.328937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:54:32.328949 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.328960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:54:32.328971 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.328995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:54:32.329006 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.329017 | orchestrator | 2025-09-27 21:54:32.329028 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-27 21:54:32.329039 | orchestrator | Saturday 27 September 2025 21:52:22 +0000 (0:00:00.351) 0:00:05.402 **** 2025-09-27 21:54:32.329050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:54:32.329062 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.329072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:54:32.329091 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.329102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-27 21:54:32.329113 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.329123 | orchestrator | 2025-09-27 21:54:32.329134 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-27 21:54:32.329145 | orchestrator | Saturday 27 September 2025 21:52:23 +0000 (0:00:00.863) 0:00:06.266 **** 2025-09-27 21:54:32.329157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.329173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.329192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.329203 | orchestrator | 2025-09-27 21:54:32.329214 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-27 21:54:32.329225 | orchestrator | Saturday 27 September 2025 21:52:24 +0000 (0:00:01.316) 0:00:07.582 **** 2025-09-27 21:54:32.329236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.329255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.329267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.329278 | orchestrator | 2025-09-27 21:54:32.329289 | orchestrator | TASK 
[grafana : Copying over extra configuration file] ************************* 2025-09-27 21:54:32.329300 | orchestrator | Saturday 27 September 2025 21:52:25 +0000 (0:00:01.260) 0:00:08.843 **** 2025-09-27 21:54:32.329310 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.329321 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.329332 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.329342 | orchestrator | 2025-09-27 21:54:32.329353 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-27 21:54:32.329364 | orchestrator | Saturday 27 September 2025 21:52:26 +0000 (0:00:00.458) 0:00:09.301 **** 2025-09-27 21:54:32.329374 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-27 21:54:32.329386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-27 21:54:32.329397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-27 21:54:32.329407 | orchestrator | 2025-09-27 21:54:32.329452 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-27 21:54:32.329464 | orchestrator | Saturday 27 September 2025 21:52:27 +0000 (0:00:01.324) 0:00:10.626 **** 2025-09-27 21:54:32.329475 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-27 21:54:32.329486 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-27 21:54:32.329502 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-27 21:54:32.329513 | orchestrator | 2025-09-27 21:54:32.329524 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-27 21:54:32.329535 | orchestrator | Saturday 27 September 2025 21:52:29 +0000 (0:00:01.292) 0:00:11.918 **** 2025-09-27 21:54:32.329552 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:54:32.329563 | orchestrator | 2025-09-27 21:54:32.329574 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-27 21:54:32.329585 | orchestrator | Saturday 27 September 2025 21:52:29 +0000 (0:00:00.737) 0:00:12.656 **** 2025-09-27 21:54:32.329603 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-27 21:54:32.329614 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-27 21:54:32.329624 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:54:32.329665 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:54:32.329677 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:54:32.329687 | orchestrator | 2025-09-27 21:54:32.329698 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-27 21:54:32.329709 | orchestrator | Saturday 27 September 2025 21:52:30 +0000 (0:00:00.725) 0:00:13.381 **** 2025-09-27 21:54:32.329720 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.329730 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.329741 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.329751 | orchestrator | 2025-09-27 21:54:32.329762 | orchestrator | TASK [grafana : Copying over custom dashboards] 
******************************** 2025-09-27 21:54:32.329773 | orchestrator | Saturday 27 September 2025 21:52:31 +0000 (0:00:00.544) 0:00:13.926 **** 2025-09-27 21:54:32.329785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 850482, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2649302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 850482, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2649302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 850482, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2649302, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 850545, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.28627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 850545, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.28627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-27 21:54:32.329863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 850545, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.28627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 850514, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2714427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 850514, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2714427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 850514, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2714427, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 850546, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2876813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 850546, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2876813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 850546, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2876813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 850526, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2763896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 850526, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2763896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.329992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 850526, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2763896, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 39556, 'inode': 850540, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2846813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 850540, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2846813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 850540, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2846813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 850478, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2618706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 850478, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2618706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 850478, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2618706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 850505, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2693496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 850505, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2693496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 850505, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2693496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 850515, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2720203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 850515, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2720203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330944 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 850515, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2720203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 850532, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2776814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.330967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 850532, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2776814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 850532, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2776814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 850544, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2858005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 850544, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2858005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 850544, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2858005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 850509, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2696812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 850509, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2696812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 850509, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2696812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 850539, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2806814, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 850539, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2806814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 850539, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2806814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 850528, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2776568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 850528, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2776568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 850528, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2776568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-09-27 21:54:32.331217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 850523, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2748346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 850523, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2748346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 850523, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2748346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 850519, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2743397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 850519, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2743397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331288 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 850519, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2743397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 850534, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2796814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 850534, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2796814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 850534, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2796814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 850516, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.273326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 850516, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.273326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 850516, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.273326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 850542, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2846813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 850542, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2846813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 850542, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2846813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 850604, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 
'mtime': 1759003368.0, 'ctime': 1759006972.3447866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 850604, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3447866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 850604, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3447866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 850556, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3166819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 850556, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3166819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 850556, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3166819, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 850552, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2926815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 850552, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2926815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 850552, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2926815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 850561, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.319682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 850561, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.319682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 
21:54:32.331621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 850561, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.319682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 850548, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2892597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 850548, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2892597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 850548, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2892597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 850585, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3368242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331725 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 850585, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3368242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 850585, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3368242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 850562, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3326821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 850562, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3326821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 850562, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3326821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331800 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 850587, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3374805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 850587, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3374805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 850587, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3374805, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 850600, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.34414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 850600, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.34414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 850600, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.34414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 850582, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3356822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 850582, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3356822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 850582, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3356822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 850559, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.318682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 850559, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.318682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 850559, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.318682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 850554, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3106818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 850554, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3106818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.331997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 850554, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3106818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 850558, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 
1759003368.0, 'ctime': 1759006972.3176818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 850558, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3176818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 850558, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3176818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 850553, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2966816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 850553, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2966816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 850553, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2966816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 850560, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.319682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 850560, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.319682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 850560, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.319682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 850595, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3433373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 850595, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3433373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 850590, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.340448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 850595, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3433373, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 850590, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.340448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 850549, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2906816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 850590, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.340448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 850549, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2906816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 850551, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2916815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 850551, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2916815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 850549, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2906816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 850579, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3349552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332318 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 850551, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.2916815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 850579, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3349552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 850589, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3376822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 850579, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3349552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 850589, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3376822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 850589, 'dev': 135, 'nlink': 1, 'atime': 1759003368.0, 'mtime': 1759003368.0, 'ctime': 1759006972.3376822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-27 21:54:32.332409 | orchestrator | 2025-09-27 21:54:32.332422 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-27 21:54:32.332434 | orchestrator | Saturday 27 September 2025 21:53:09 +0000 (0:00:38.078) 0:00:52.004 **** 2025-09-27 21:54:32.332446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.332457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.332469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-27 21:54:32.332480 | orchestrator | 2025-09-27 21:54:32.332491 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-27 21:54:32.332502 | orchestrator | Saturday 27 September 2025 21:53:10 +0000 (0:00:00.960) 0:00:52.965 **** 2025-09-27 21:54:32.332514 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:54:32.332525 | orchestrator | 2025-09-27 21:54:32.332536 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-27 21:54:32.332546 | orchestrator | Saturday 27 September 2025 21:53:12 
+0000 (0:00:02.343) 0:00:55.309 **** 2025-09-27 21:54:32.332557 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:54:32.332568 | orchestrator | 2025-09-27 21:54:32.332578 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-27 21:54:32.332589 | orchestrator | Saturday 27 September 2025 21:53:14 +0000 (0:00:02.523) 0:00:57.832 **** 2025-09-27 21:54:32.332600 | orchestrator | 2025-09-27 21:54:32.332615 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-27 21:54:32.332652 | orchestrator | Saturday 27 September 2025 21:53:15 +0000 (0:00:00.096) 0:00:57.929 **** 2025-09-27 21:54:32.332663 | orchestrator | 2025-09-27 21:54:32.332805 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-27 21:54:32.332822 | orchestrator | Saturday 27 September 2025 21:53:15 +0000 (0:00:00.071) 0:00:58.000 **** 2025-09-27 21:54:32.332833 | orchestrator | 2025-09-27 21:54:32.332844 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-27 21:54:32.332855 | orchestrator | Saturday 27 September 2025 21:53:15 +0000 (0:00:00.256) 0:00:58.256 **** 2025-09-27 21:54:32.332865 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.332876 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.332887 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:54:32.332898 | orchestrator | 2025-09-27 21:54:32.332908 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-27 21:54:32.332919 | orchestrator | Saturday 27 September 2025 21:53:17 +0000 (0:00:01.905) 0:01:00.161 **** 2025-09-27 21:54:32.332930 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.332941 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.332951 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-27 21:54:32.332963 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-27 21:54:32.332974 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
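The handler above simply retries until Grafana answers on the first node. Purely as an illustration (not the kolla-ansible task itself, whose implementation is not shown in this log), a readiness poll of that shape could look like the following Python sketch; the URL, retry budget and delay are assumptions:

import time
import urllib.error
import urllib.request

def wait_for_http(url: str, retries: int = 12, delay: float = 10.0) -> bool:
    # Return True once the endpoint answers, False when all retries are used up.
    for _ in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status < 500:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # not up yet; corresponds to the FAILED - RETRYING lines above
        time.sleep(delay)
    return False

if __name__ == "__main__":
    # Hypothetical first-node address and Grafana health endpoint.
    print(wait_for_http("http://192.168.16.10:3000/api/health"))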
2025-09-27 21:54:32.332984 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:54:32.332996 | orchestrator | 2025-09-27 21:54:32.333007 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-27 21:54:32.333018 | orchestrator | Saturday 27 September 2025 21:53:56 +0000 (0:00:39.393) 0:01:39.555 **** 2025-09-27 21:54:32.333028 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.333039 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:54:32.333050 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:54:32.333061 | orchestrator | 2025-09-27 21:54:32.333071 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-27 21:54:32.333082 | orchestrator | Saturday 27 September 2025 21:54:24 +0000 (0:00:27.556) 0:02:07.112 **** 2025-09-27 21:54:32.333093 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:54:32.333104 | orchestrator | 2025-09-27 21:54:32.333114 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-27 21:54:32.333125 | orchestrator | Saturday 27 September 2025 21:54:26 +0000 (0:00:02.436) 0:02:09.549 **** 2025-09-27 21:54:32.333136 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.333147 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:54:32.333158 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:54:32.333168 | orchestrator | 2025-09-27 21:54:32.333179 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-27 21:54:32.333190 | orchestrator | Saturday 27 September 2025 21:54:27 +0000 (0:00:00.483) 0:02:10.032 **** 2025-09-27 21:54:32.333202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-09-27 21:54:32.333216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-09-27 21:54:32.333228 | orchestrator | 2025-09-27 21:54:32.333239 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-09-27 21:54:32.333250 | orchestrator | Saturday 27 September 2025 21:54:29 +0000 (0:00:02.133) 0:02:12.165 **** 2025-09-27 21:54:32.333269 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:54:32.333280 | orchestrator | 2025-09-27 21:54:32.333291 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:54:32.333302 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-27 21:54:32.333314 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-27 21:54:32.333325 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-27 21:54:32.333336 | orchestrator | 2025-09-27 21:54:32.333346 | orchestrator | 2025-09-27 21:54:32.333357 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-27 21:54:32.333368 | orchestrator | Saturday 27 September 2025 21:54:29 +0000 (0:00:00.261) 0:02:12.427 **** 2025-09-27 21:54:32.333378 | orchestrator | =============================================================================== 2025-09-27 21:54:32.333389 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.39s 2025-09-27 21:54:32.333400 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.08s 2025-09-27 21:54:32.333410 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.56s 2025-09-27 21:54:32.333421 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.52s 2025-09-27 21:54:32.333438 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s 2025-09-27 21:54:32.333450 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.34s 2025-09-27 21:54:32.333469 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.13s 2025-09-27 21:54:32.333482 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.91s 2025-09-27 21:54:32.333494 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.38s 2025-09-27 21:54:32.333506 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.32s 2025-09-27 21:54:32.333518 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.32s 2025-09-27 21:54:32.333530 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.29s 2025-09-27 21:54:32.333542 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.26s 2025-09-27 21:54:32.333553 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.96s 2025-09-27 21:54:32.333565 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.86s 2025-09-27 21:54:32.333577 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s 2025-09-27 21:54:32.333589 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s 2025-09-27 21:54:32.333601 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.73s 2025-09-27 21:54:32.333612 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s 2025-09-27 21:54:32.333624 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.70s 2025-09-27 21:54:32.333657 | orchestrator | 2025-09-27 21:54:32 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:32.333670 | orchestrator | 2025-09-27 21:54:32 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:32.333683 | orchestrator | 2025-09-27 21:54:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:35.376945 | orchestrator | 2025-09-27 21:54:35 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:35.377700 | orchestrator | 2025-09-27 21:54:35 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:35.377761 | orchestrator | 2025-09-27 21:54:35 | INFO  | Wait 1 second(s) until the next 
check 2025-09-27 21:54:38.416472 | orchestrator | 2025-09-27 21:54:38 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:38.417311 | orchestrator | 2025-09-27 21:54:38 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:38.417342 | orchestrator | 2025-09-27 21:54:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:41.468791 | orchestrator | 2025-09-27 21:54:41 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:41.469735 | orchestrator | 2025-09-27 21:54:41 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:41.470095 | orchestrator | 2025-09-27 21:54:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:44.507773 | orchestrator | 2025-09-27 21:54:44 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:44.509978 | orchestrator | 2025-09-27 21:54:44 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state STARTED 2025-09-27 21:54:44.510072 | orchestrator | 2025-09-27 21:54:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:47.551387 | orchestrator | 2025-09-27 21:54:47 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:47.552075 | orchestrator | 2025-09-27 21:54:47 | INFO  | Task 5caee0f2-2ba9-4937-b22d-6a655bdff1db is in state SUCCESS 2025-09-27 21:54:47.552546 | orchestrator | 2025-09-27 21:54:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:50.600430 | orchestrator | 2025-09-27 21:54:50 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:50.600552 | orchestrator | 2025-09-27 21:54:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:53.642000 | orchestrator | 2025-09-27 21:54:53 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:53.642324 | orchestrator | 2025-09-27 21:54:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:56.688642 | orchestrator | 2025-09-27 21:54:56 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:56.688776 | orchestrator | 2025-09-27 21:54:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:54:59.722144 | orchestrator | 2025-09-27 21:54:59 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:54:59.722254 | orchestrator | 2025-09-27 21:54:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:02.792789 | orchestrator | 2025-09-27 21:55:02 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:02.792886 | orchestrator | 2025-09-27 21:55:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:05.804554 | orchestrator | 2025-09-27 21:55:05 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:05.804678 | orchestrator | 2025-09-27 21:55:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:08.844585 | orchestrator | 2025-09-27 21:55:08 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:08.844660 | orchestrator | 2025-09-27 21:55:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:11.876777 | orchestrator | 2025-09-27 21:55:11 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:11.877548 | orchestrator | 2025-09-27 21:55:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:14.918899 | 
orchestrator | 2025-09-27 21:55:14 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:14.919032 | orchestrator | 2025-09-27 21:55:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:17.958108 | orchestrator | 2025-09-27 21:55:17 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:17.958202 | orchestrator | 2025-09-27 21:55:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:20.994244 | orchestrator | 2025-09-27 21:55:20 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:20.994382 | orchestrator | 2025-09-27 21:55:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:24.042546 | orchestrator | 2025-09-27 21:55:24 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:24.042660 | orchestrator | 2025-09-27 21:55:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:27.099921 | orchestrator | 2025-09-27 21:55:27 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:27.100028 | orchestrator | 2025-09-27 21:55:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:30.141230 | orchestrator | 2025-09-27 21:55:30 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:30.141349 | orchestrator | 2025-09-27 21:55:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:33.177031 | orchestrator | 2025-09-27 21:55:33 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:33.177125 | orchestrator | 2025-09-27 21:55:33 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:36.216894 | orchestrator | 2025-09-27 21:55:36 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:36.217193 | orchestrator | 2025-09-27 21:55:36 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:39.251371 | orchestrator | 2025-09-27 21:55:39 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:39.281117 | orchestrator | 2025-09-27 21:55:39 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:42.316976 | orchestrator | 2025-09-27 21:55:42 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:42.317070 | orchestrator | 2025-09-27 21:55:42 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:45.350595 | orchestrator | 2025-09-27 21:55:45 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:45.350670 | orchestrator | 2025-09-27 21:55:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:48.386721 | orchestrator | 2025-09-27 21:55:48 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:48.386960 | orchestrator | 2025-09-27 21:55:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:51.414655 | orchestrator | 2025-09-27 21:55:51 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:51.414825 | orchestrator | 2025-09-27 21:55:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:54.443568 | orchestrator | 2025-09-27 21:55:54 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:54.443682 | orchestrator | 2025-09-27 21:55:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:55:57.489104 | orchestrator | 2025-09-27 21:55:57 | INFO  | Task 
dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:55:57.489196 | orchestrator | 2025-09-27 21:55:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:00.528258 | orchestrator | 2025-09-27 21:56:00 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:00.528391 | orchestrator | 2025-09-27 21:56:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:03.563035 | orchestrator | 2025-09-27 21:56:03 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:03.563133 | orchestrator | 2025-09-27 21:56:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:06.604940 | orchestrator | 2025-09-27 21:56:06 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:06.605115 | orchestrator | 2025-09-27 21:56:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:09.654564 | orchestrator | 2025-09-27 21:56:09 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:09.654676 | orchestrator | 2025-09-27 21:56:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:12.705377 | orchestrator | 2025-09-27 21:56:12 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:12.705485 | orchestrator | 2025-09-27 21:56:12 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:15.749903 | orchestrator | 2025-09-27 21:56:15 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:15.750007 | orchestrator | 2025-09-27 21:56:15 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:18.795106 | orchestrator | 2025-09-27 21:56:18 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:18.795250 | orchestrator | 2025-09-27 21:56:18 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:21.837206 | orchestrator | 2025-09-27 21:56:21 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:21.837310 | orchestrator | 2025-09-27 21:56:21 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:24.886196 | orchestrator | 2025-09-27 21:56:24 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:24.886278 | orchestrator | 2025-09-27 21:56:24 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:27.927797 | orchestrator | 2025-09-27 21:56:27 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:27.927990 | orchestrator | 2025-09-27 21:56:27 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:30.968281 | orchestrator | 2025-09-27 21:56:30 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:30.968395 | orchestrator | 2025-09-27 21:56:30 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:34.010383 | orchestrator | 2025-09-27 21:56:34 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:34.010485 | orchestrator | 2025-09-27 21:56:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:37.052275 | orchestrator | 2025-09-27 21:56:37 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:37.052386 | orchestrator | 2025-09-27 21:56:37 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:40.089051 | orchestrator | 2025-09-27 21:56:40 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 
21:56:40.089168 | orchestrator | 2025-09-27 21:56:40 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:43.131161 | orchestrator | 2025-09-27 21:56:43 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:43.131276 | orchestrator | 2025-09-27 21:56:43 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:46.179431 | orchestrator | 2025-09-27 21:56:46 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:46.179558 | orchestrator | 2025-09-27 21:56:46 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:49.216396 | orchestrator | 2025-09-27 21:56:49 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:49.216504 | orchestrator | 2025-09-27 21:56:49 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:52.259488 | orchestrator | 2025-09-27 21:56:52 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:52.259596 | orchestrator | 2025-09-27 21:56:52 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:55.294524 | orchestrator | 2025-09-27 21:56:55 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:55.294616 | orchestrator | 2025-09-27 21:56:55 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:56:58.349702 | orchestrator | 2025-09-27 21:56:58 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:56:58.349819 | orchestrator | 2025-09-27 21:56:58 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:01.387020 | orchestrator | 2025-09-27 21:57:01 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:01.387119 | orchestrator | 2025-09-27 21:57:01 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:04.432405 | orchestrator | 2025-09-27 21:57:04 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:04.432515 | orchestrator | 2025-09-27 21:57:04 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:07.472819 | orchestrator | 2025-09-27 21:57:07 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:07.472988 | orchestrator | 2025-09-27 21:57:07 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:10.520126 | orchestrator | 2025-09-27 21:57:10 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:10.520239 | orchestrator | 2025-09-27 21:57:10 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:13.581788 | orchestrator | 2025-09-27 21:57:13 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:13.581960 | orchestrator | 2025-09-27 21:57:13 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:16.631965 | orchestrator | 2025-09-27 21:57:16 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:16.632069 | orchestrator | 2025-09-27 21:57:16 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:19.682014 | orchestrator | 2025-09-27 21:57:19 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:19.682225 | orchestrator | 2025-09-27 21:57:19 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:22.723072 | orchestrator | 2025-09-27 21:57:22 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:22.723187 | orchestrator | 2025-09-27 21:57:22 | INFO  | Wait 1 second(s) 
until the next check 2025-09-27 21:57:25.773045 | orchestrator | 2025-09-27 21:57:25 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:25.773142 | orchestrator | 2025-09-27 21:57:25 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:28.827754 | orchestrator | 2025-09-27 21:57:28 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:28.827854 | orchestrator | 2025-09-27 21:57:28 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:31.905152 | orchestrator | 2025-09-27 21:57:31 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:31.905261 | orchestrator | 2025-09-27 21:57:31 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:34.963780 | orchestrator | 2025-09-27 21:57:34 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:34.963882 | orchestrator | 2025-09-27 21:57:34 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:38.034438 | orchestrator | 2025-09-27 21:57:38 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:38.034551 | orchestrator | 2025-09-27 21:57:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:41.087474 | orchestrator | 2025-09-27 21:57:41 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:41.087556 | orchestrator | 2025-09-27 21:57:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:44.132914 | orchestrator | 2025-09-27 21:57:44 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:44.134207 | orchestrator | 2025-09-27 21:57:44 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:47.186681 | orchestrator | 2025-09-27 21:57:47 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:47.186766 | orchestrator | 2025-09-27 21:57:47 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:50.235534 | orchestrator | 2025-09-27 21:57:50 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:50.235652 | orchestrator | 2025-09-27 21:57:50 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:53.279894 | orchestrator | 2025-09-27 21:57:53 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:53.280065 | orchestrator | 2025-09-27 21:57:53 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:56.324341 | orchestrator | 2025-09-27 21:57:56 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:56.325711 | orchestrator | 2025-09-27 21:57:56 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:57:59.356434 | orchestrator | 2025-09-27 21:57:59 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:57:59.356535 | orchestrator | 2025-09-27 21:57:59 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:02.407264 | orchestrator | 2025-09-27 21:58:02 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:02.407398 | orchestrator | 2025-09-27 21:58:02 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:05.457346 | orchestrator | 2025-09-27 21:58:05 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:05.457452 | orchestrator | 2025-09-27 21:58:05 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:08.509556 | orchestrator | 2025-09-27 
21:58:08 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:08.509674 | orchestrator | 2025-09-27 21:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:11.562400 | orchestrator | 2025-09-27 21:58:11 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:11.562514 | orchestrator | 2025-09-27 21:58:11 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:14.608637 | orchestrator | 2025-09-27 21:58:14 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:14.608748 | orchestrator | 2025-09-27 21:58:14 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:17.661406 | orchestrator | 2025-09-27 21:58:17 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:17.661512 | orchestrator | 2025-09-27 21:58:17 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:20.710104 | orchestrator | 2025-09-27 21:58:20 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:20.710217 | orchestrator | 2025-09-27 21:58:20 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:23.748090 | orchestrator | 2025-09-27 21:58:23 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:23.748163 | orchestrator | 2025-09-27 21:58:23 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:26.774322 | orchestrator | 2025-09-27 21:58:26 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:26.774418 | orchestrator | 2025-09-27 21:58:26 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:29.810554 | orchestrator | 2025-09-27 21:58:29 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:29.810603 | orchestrator | 2025-09-27 21:58:29 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:32.851475 | orchestrator | 2025-09-27 21:58:32 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:32.851585 | orchestrator | 2025-09-27 21:58:32 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:35.901411 | orchestrator | 2025-09-27 21:58:35 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:35.901498 | orchestrator | 2025-09-27 21:58:35 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:38.945606 | orchestrator | 2025-09-27 21:58:38 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:38.945739 | orchestrator | 2025-09-27 21:58:38 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:41.991238 | orchestrator | 2025-09-27 21:58:41 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:41.991306 | orchestrator | 2025-09-27 21:58:41 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:45.039663 | orchestrator | 2025-09-27 21:58:45 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:45.039776 | orchestrator | 2025-09-27 21:58:45 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:48.092067 | orchestrator | 2025-09-27 21:58:48 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:48.092163 | orchestrator | 2025-09-27 21:58:48 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:51.145452 | orchestrator | 2025-09-27 21:58:51 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 
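The repeated lines above are a plain poll-until-terminal-state wait: the manager asks for the task state, logs it, sleeps, and checks again until the task leaves STARTED. A minimal sketch of such a loop, assuming a hypothetical get_state(task_id) client callable (the actual OSISM client is not shown in this log):

import time
from typing import Callable

# Only STARTED and SUCCESS appear in this log; FAILURE is assumed here as a
# second terminal state for completeness.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_task(task_id: str, get_state: Callable[[str], str],
                  interval: float = 1.0, timeout: float = 3600.0) -> str:
    # Poll get_state(task_id) until a terminal state or the timeout is reached.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state in TERMINAL_STATES:
            return state
        print("Wait 1 second(s) until the next check")
        # The ~3 s spacing between checks in the log presumably includes the
        # query round-trip on top of the 1 s wait.
        time.sleep(interval)
    raise TimeoutError(f"Task {task_id} did not finish within {timeout} seconds")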
2025-09-27 21:58:51.145566 | orchestrator | 2025-09-27 21:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:54.186358 | orchestrator | 2025-09-27 21:58:54 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:54.186449 | orchestrator | 2025-09-27 21:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:58:57.229913 | orchestrator | 2025-09-27 21:58:57 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:58:57.230195 | orchestrator | 2025-09-27 21:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:59:00.267235 | orchestrator | 2025-09-27 21:59:00 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:59:00.267340 | orchestrator | 2025-09-27 21:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:59:03.311697 | orchestrator | 2025-09-27 21:59:03 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:59:03.311818 | orchestrator | 2025-09-27 21:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:59:06.360508 | orchestrator | 2025-09-27 21:59:06 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:59:06.360603 | orchestrator | 2025-09-27 21:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:59:09.391234 | orchestrator | 2025-09-27 21:59:09 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state STARTED 2025-09-27 21:59:09.391344 | orchestrator | 2025-09-27 21:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-27 21:59:12.441197 | orchestrator | 2025-09-27 21:59:12 | INFO  | Task dee13578-a0b9-43f0-85b9-0a18fc5d123b is in state SUCCESS 2025-09-27 21:59:12.443005 | orchestrator | 2025-09-27 21:59:12.443106 | orchestrator | 2025-09-27 21:59:12.443121 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:59:12.443132 | orchestrator | 2025-09-27 21:59:12.443142 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:59:12.443153 | orchestrator | Saturday 27 September 2025 21:50:53 +0000 (0:00:00.169) 0:00:00.169 **** 2025-09-27 21:59:12.443163 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.443174 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:59:12.443184 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:59:12.443194 | orchestrator | 2025-09-27 21:59:12.443203 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:59:12.443213 | orchestrator | Saturday 27 September 2025 21:50:53 +0000 (0:00:00.255) 0:00:00.424 **** 2025-09-27 21:59:12.443223 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-27 21:59:12.443233 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-27 21:59:12.443243 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-27 21:59:12.443253 | orchestrator | 2025-09-27 21:59:12.443262 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-27 21:59:12.443275 | orchestrator | 2025-09-27 21:59:12.443292 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-27 21:59:12.443310 | orchestrator | Saturday 27 September 2025 21:50:53 +0000 (0:00:00.500) 0:00:00.924 **** 2025-09-27 21:59:12.443326 | orchestrator | 2025-09-27 21:59:12.443342 | orchestrator | STILL ALIVE [task 
'Waiting for Nova public port to be UP' is running] ********** 2025-09-27 21:59:12.443358 | orchestrator | 2025-09-27 21:59:12.443374 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-27 21:59:12.443389 | orchestrator | 2025-09-27 21:59:12.443405 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-27 21:59:12.443422 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.443438 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:59:12.443454 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:59:12.443470 | orchestrator | 2025-09-27 21:59:12.443486 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:59:12.443503 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:59:12.443522 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:59:12.443539 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:59:12.443555 | orchestrator | 2025-09-27 21:59:12.443572 | orchestrator | 2025-09-27 21:59:12.443584 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:59:12.443595 | orchestrator | Saturday 27 September 2025 21:54:46 +0000 (0:03:52.927) 0:03:53.852 **** 2025-09-27 21:59:12.443607 | orchestrator | =============================================================================== 2025-09-27 21:59:12.443643 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 232.93s 2025-09-27 21:59:12.443657 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-09-27 21:59:12.443674 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-27 21:59:12.443690 | orchestrator | 2025-09-27 21:59:12.443706 | orchestrator | 2025-09-27 21:59:12.443723 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-27 21:59:12.443740 | orchestrator | 2025-09-27 21:59:12.443757 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-27 21:59:12.443773 | orchestrator | Saturday 27 September 2025 21:50:30 +0000 (0:00:00.273) 0:00:00.273 **** 2025-09-27 21:59:12.443791 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:12.443810 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.444120 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.444145 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.444154 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.444164 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.444181 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.444197 | orchestrator | 2025-09-27 21:59:12.444213 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-27 21:59:12.444230 | orchestrator | Saturday 27 September 2025 21:50:31 +0000 (0:00:00.868) 0:00:01.141 **** 2025-09-27 21:59:12.444246 | orchestrator | changed: [testbed-manager] 2025-09-27 21:59:12.444262 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.444278 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.444295 | orchestrator | changed: [testbed-node-2] 2025-09-27 
21:59:12.444312 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.444328 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.444344 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.444357 | orchestrator | 2025-09-27 21:59:12.444367 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-27 21:59:12.444377 | orchestrator | Saturday 27 September 2025 21:50:32 +0000 (0:00:00.851) 0:00:01.993 **** 2025-09-27 21:59:12.444386 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-27 21:59:12.444398 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-27 21:59:12.444414 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-27 21:59:12.444430 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-27 21:59:12.444446 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-27 21:59:12.444461 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-27 21:59:12.444477 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-27 21:59:12.444493 | orchestrator | 2025-09-27 21:59:12.444508 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-27 21:59:12.444525 | orchestrator | 2025-09-27 21:59:12.444540 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-27 21:59:12.444556 | orchestrator | Saturday 27 September 2025 21:50:33 +0000 (0:00:00.976) 0:00:02.969 **** 2025-09-27 21:59:12.444599 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.444618 | orchestrator | 2025-09-27 21:59:12.444635 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-27 21:59:12.444737 | orchestrator | Saturday 27 September 2025 21:50:34 +0000 (0:00:00.898) 0:00:03.868 **** 2025-09-27 21:59:12.444758 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-27 21:59:12.444776 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-27 21:59:12.444793 | orchestrator | 2025-09-27 21:59:12.444809 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-27 21:59:12.444827 | orchestrator | Saturday 27 September 2025 21:50:38 +0000 (0:00:04.448) 0:00:08.317 **** 2025-09-27 21:59:12.444845 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:59:12.444881 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-27 21:59:12.444898 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.444914 | orchestrator | 2025-09-27 21:59:12.444931 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-27 21:59:12.444948 | orchestrator | Saturday 27 September 2025 21:50:43 +0000 (0:00:04.730) 0:00:13.047 **** 2025-09-27 21:59:12.444965 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.444981 | orchestrator | 2025-09-27 21:59:12.445000 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-27 21:59:12.445018 | orchestrator | Saturday 27 September 2025 21:50:44 +0000 (0:00:00.700) 0:00:13.748 **** 2025-09-27 21:59:12.445036 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.445054 | orchestrator | 2025-09-27 21:59:12.445101 | orchestrator | TASK [nova : Copying over 
nova.conf for nova-api-bootstrap] ******************** 2025-09-27 21:59:12.445118 | orchestrator | Saturday 27 September 2025 21:50:45 +0000 (0:00:01.567) 0:00:15.316 **** 2025-09-27 21:59:12.445135 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.445145 | orchestrator | 2025-09-27 21:59:12.445155 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-27 21:59:12.445165 | orchestrator | Saturday 27 September 2025 21:50:48 +0000 (0:00:02.687) 0:00:18.004 **** 2025-09-27 21:59:12.445174 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.445184 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.445194 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.445203 | orchestrator | 2025-09-27 21:59:12.445213 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-27 21:59:12.445223 | orchestrator | Saturday 27 September 2025 21:50:48 +0000 (0:00:00.417) 0:00:18.422 **** 2025-09-27 21:59:12.445232 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.445242 | orchestrator | 2025-09-27 21:59:12.445251 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-27 21:59:12.445261 | orchestrator | Saturday 27 September 2025 21:51:21 +0000 (0:00:33.019) 0:00:51.441 **** 2025-09-27 21:59:12.445271 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.445280 | orchestrator | 2025-09-27 21:59:12.445290 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-27 21:59:12.445299 | orchestrator | Saturday 27 September 2025 21:51:38 +0000 (0:00:17.043) 0:01:08.484 **** 2025-09-27 21:59:12.445309 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.445319 | orchestrator | 2025-09-27 21:59:12.445328 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-27 21:59:12.445338 | orchestrator | Saturday 27 September 2025 21:51:52 +0000 (0:00:13.846) 0:01:22.330 **** 2025-09-27 21:59:12.445347 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.445357 | orchestrator | 2025-09-27 21:59:12.445367 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-27 21:59:12.445376 | orchestrator | Saturday 27 September 2025 21:51:53 +0000 (0:00:01.104) 0:01:23.435 **** 2025-09-27 21:59:12.445386 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.445395 | orchestrator | 2025-09-27 21:59:12.445405 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-27 21:59:12.445425 | orchestrator | Saturday 27 September 2025 21:51:54 +0000 (0:00:00.453) 0:01:23.889 **** 2025-09-27 21:59:12.445436 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.445446 | orchestrator | 2025-09-27 21:59:12.445456 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-27 21:59:12.445465 | orchestrator | Saturday 27 September 2025 21:51:54 +0000 (0:00:00.468) 0:01:24.357 **** 2025-09-27 21:59:12.445475 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.445485 | orchestrator | 2025-09-27 21:59:12.445494 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-27 21:59:12.445504 | orchestrator | Saturday 27 September 2025 21:52:14 +0000 
(0:00:19.911) 0:01:44.269 **** 2025-09-27 21:59:12.445524 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.445533 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.445543 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.445552 | orchestrator | 2025-09-27 21:59:12.445562 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-27 21:59:12.445571 | orchestrator | 2025-09-27 21:59:12.445581 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-27 21:59:12.445590 | orchestrator | Saturday 27 September 2025 21:52:14 +0000 (0:00:00.401) 0:01:44.671 **** 2025-09-27 21:59:12.445600 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.445609 | orchestrator | 2025-09-27 21:59:12.445619 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-27 21:59:12.445628 | orchestrator | Saturday 27 September 2025 21:52:15 +0000 (0:00:00.928) 0:01:45.599 **** 2025-09-27 21:59:12.445638 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.445648 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.445657 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.445667 | orchestrator | 2025-09-27 21:59:12.445676 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-27 21:59:12.445686 | orchestrator | Saturday 27 September 2025 21:52:18 +0000 (0:00:02.284) 0:01:47.884 **** 2025-09-27 21:59:12.445696 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.445705 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.445728 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.445738 | orchestrator | 2025-09-27 21:59:12.445748 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-27 21:59:12.445758 | orchestrator | Saturday 27 September 2025 21:52:20 +0000 (0:00:02.263) 0:01:50.148 **** 2025-09-27 21:59:12.445767 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.445777 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.445786 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.445796 | orchestrator | 2025-09-27 21:59:12.445805 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-27 21:59:12.445815 | orchestrator | Saturday 27 September 2025 21:52:20 +0000 (0:00:00.322) 0:01:50.470 **** 2025-09-27 21:59:12.445825 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-27 21:59:12.445834 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446199 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-27 21:59:12.446225 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446243 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-27 21:59:12.446261 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-27 21:59:12.446277 | orchestrator | 2025-09-27 21:59:12.446293 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-27 21:59:12.446311 | orchestrator | Saturday 27 September 2025 21:52:29 +0000 (0:00:08.538) 0:01:59.009 **** 2025-09-27 21:59:12.446324 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.446334 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446343 | 
orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446353 | orchestrator | 2025-09-27 21:59:12.446363 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-27 21:59:12.446372 | orchestrator | Saturday 27 September 2025 21:52:29 +0000 (0:00:00.347) 0:01:59.357 **** 2025-09-27 21:59:12.446382 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-27 21:59:12.446392 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.446401 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-27 21:59:12.446411 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446421 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-27 21:59:12.446430 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446440 | orchestrator | 2025-09-27 21:59:12.446449 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-27 21:59:12.446459 | orchestrator | Saturday 27 September 2025 21:52:30 +0000 (0:00:00.640) 0:01:59.997 **** 2025-09-27 21:59:12.446480 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446490 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446500 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.446509 | orchestrator | 2025-09-27 21:59:12.446519 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-27 21:59:12.446528 | orchestrator | Saturday 27 September 2025 21:52:30 +0000 (0:00:00.492) 0:02:00.489 **** 2025-09-27 21:59:12.446538 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446547 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446557 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.446566 | orchestrator | 2025-09-27 21:59:12.446576 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-27 21:59:12.446586 | orchestrator | Saturday 27 September 2025 21:52:31 +0000 (0:00:00.968) 0:02:01.458 **** 2025-09-27 21:59:12.446595 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446605 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446614 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.446624 | orchestrator | 2025-09-27 21:59:12.446633 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-27 21:59:12.446643 | orchestrator | Saturday 27 September 2025 21:52:33 +0000 (0:00:02.192) 0:02:03.650 **** 2025-09-27 21:59:12.446653 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446662 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446672 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.446681 | orchestrator | 2025-09-27 21:59:12.446699 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-27 21:59:12.446710 | orchestrator | Saturday 27 September 2025 21:52:56 +0000 (0:00:22.269) 0:02:25.920 **** 2025-09-27 21:59:12.446719 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446729 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446739 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.446748 | orchestrator | 2025-09-27 21:59:12.446758 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-27 21:59:12.446767 | orchestrator | Saturday 27 September 2025 21:53:10 +0000 (0:00:13.972) 
0:02:39.892 **** 2025-09-27 21:59:12.446777 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.446786 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446796 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446805 | orchestrator | 2025-09-27 21:59:12.446815 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-27 21:59:12.446824 | orchestrator | Saturday 27 September 2025 21:53:11 +0000 (0:00:01.074) 0:02:40.967 **** 2025-09-27 21:59:12.446834 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446843 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446853 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.446862 | orchestrator | 2025-09-27 21:59:12.446872 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-27 21:59:12.446881 | orchestrator | Saturday 27 September 2025 21:53:24 +0000 (0:00:13.396) 0:02:54.363 **** 2025-09-27 21:59:12.446891 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.446900 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446910 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446919 | orchestrator | 2025-09-27 21:59:12.446929 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-27 21:59:12.446938 | orchestrator | Saturday 27 September 2025 21:53:25 +0000 (0:00:01.016) 0:02:55.379 **** 2025-09-27 21:59:12.446948 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.446957 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.446967 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.446976 | orchestrator | 2025-09-27 21:59:12.446986 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-27 21:59:12.446996 | orchestrator | 2025-09-27 21:59:12.447018 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-27 21:59:12.447034 | orchestrator | Saturday 27 September 2025 21:53:26 +0000 (0:00:00.501) 0:02:55.880 **** 2025-09-27 21:59:12.447044 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.447055 | orchestrator | 2025-09-27 21:59:12.447126 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-27 21:59:12.447138 | orchestrator | Saturday 27 September 2025 21:53:26 +0000 (0:00:00.531) 0:02:56.412 **** 2025-09-27 21:59:12.447148 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-27 21:59:12.447158 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-27 21:59:12.447168 | orchestrator | 2025-09-27 21:59:12.447178 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-27 21:59:12.447187 | orchestrator | Saturday 27 September 2025 21:53:30 +0000 (0:00:03.360) 0:02:59.773 **** 2025-09-27 21:59:12.447197 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-27 21:59:12.447208 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-27 21:59:12.447218 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> 
internal) 2025-09-27 21:59:12.447228 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-27 21:59:12.447238 | orchestrator | 2025-09-27 21:59:12.447247 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-27 21:59:12.447257 | orchestrator | Saturday 27 September 2025 21:53:37 +0000 (0:00:07.025) 0:03:06.798 **** 2025-09-27 21:59:12.447267 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-27 21:59:12.447276 | orchestrator | 2025-09-27 21:59:12.447286 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-27 21:59:12.447296 | orchestrator | Saturday 27 September 2025 21:53:40 +0000 (0:00:03.396) 0:03:10.194 **** 2025-09-27 21:59:12.447306 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-27 21:59:12.447315 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-27 21:59:12.447325 | orchestrator | 2025-09-27 21:59:12.447335 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-27 21:59:12.447344 | orchestrator | Saturday 27 September 2025 21:53:44 +0000 (0:00:04.124) 0:03:14.319 **** 2025-09-27 21:59:12.447354 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-27 21:59:12.447364 | orchestrator | 2025-09-27 21:59:12.447373 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-27 21:59:12.447383 | orchestrator | Saturday 27 September 2025 21:53:48 +0000 (0:00:03.601) 0:03:17.920 **** 2025-09-27 21:59:12.447392 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-27 21:59:12.447402 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-27 21:59:12.447411 | orchestrator | 2025-09-27 21:59:12.447421 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-27 21:59:12.447431 | orchestrator | Saturday 27 September 2025 21:53:56 +0000 (0:00:08.214) 0:03:26.135 **** 2025-09-27 21:59:12.447451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.447497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.447510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.447521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.447538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.447557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.447567 | orchestrator | 2025-09-27 21:59:12.447577 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-27 21:59:12.447587 | orchestrator | Saturday 27 September 2025 21:53:57 +0000 (0:00:01.263) 0:03:27.398 **** 2025-09-27 21:59:12.447599 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.447616 | orchestrator | 2025-09-27 21:59:12.447632 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-27 21:59:12.447647 | orchestrator | Saturday 27 September 2025 21:53:57 +0000 (0:00:00.149) 0:03:27.548 **** 2025-09-27 21:59:12.447661 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.447682 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.447696 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.447709 | orchestrator | 2025-09-27 21:59:12.447723 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-27 21:59:12.447737 | orchestrator | Saturday 27 September 2025 21:53:58 +0000 (0:00:00.285) 0:03:27.834 **** 2025-09-27 21:59:12.447753 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-27 21:59:12.447767 | orchestrator | 2025-09-27 21:59:12.447781 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-27 21:59:12.447793 | orchestrator | Saturday 27 September 2025 21:53:59 +0000 (0:00:00.884) 0:03:28.718 **** 2025-09-27 21:59:12.447807 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.447856 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.447865 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.447873 | orchestrator | 2025-09-27 21:59:12.447881 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-27 21:59:12.447889 | orchestrator | Saturday 27 September 2025 21:53:59 +0000 (0:00:00.308) 0:03:29.026 **** 2025-09-27 21:59:12.447897 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.447905 | orchestrator | 2025-09-27 21:59:12.447913 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-27 21:59:12.447921 | orchestrator | Saturday 27 September 2025 21:53:59 +0000 (0:00:00.552) 0:03:29.579 **** 2025-09-27 21:59:12.447930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.447956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.447974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.447983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.447992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448020 | orchestrator | 2025-09-27 21:59:12.448028 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-27 21:59:12.448036 | orchestrator | Saturday 27 September 2025 21:54:02 +0000 (0:00:02.543) 0:03:32.122 **** 2025-09-27 21:59:12.448049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448093 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.448102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448125 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.448139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448156 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.448164 | orchestrator | 2025-09-27 21:59:12.448172 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-27 21:59:12.448180 | orchestrator | Saturday 27 September 2025 21:54:03 +0000 (0:00:00.803) 0:03:32.926 **** 2025-09-27 21:59:12.448194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448217 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.448229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448246 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.448261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448284 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.448292 | orchestrator | 2025-09-27 21:59:12.448299 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-27 21:59:12.448307 | orchestrator | Saturday 27 September 2025 
21:54:03 +0000 (0:00:00.741) 0:03:33.667 **** 2025-09-27 21:59:12.448320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448388 | orchestrator | 2025-09-27 21:59:12.448396 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-27 21:59:12.448404 | orchestrator | Saturday 27 September 2025 21:54:06 +0000 (0:00:02.459) 0:03:36.127 **** 2025-09-27 21:59:12.448421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448487 | orchestrator | 2025-09-27 21:59:12.448495 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-27 21:59:12.448503 | orchestrator | Saturday 27 September 2025 21:54:11 +0000 (0:00:05.497) 0:03:41.624 **** 2025-09-27 21:59:12.448512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448533 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.448546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448569 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.448577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-27 21:59:12.448591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.448599 | orchestrator | skipping: [testbed-node-2] 
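The healthcheck dictionaries repeated in the task items above are what kolla-ansible turns into Docker health checks for the deployed containers. As a rough, illustrative equivalent only (kolla-ansible actually sets this through its own container module, healthcheck_curl is a helper script shipped inside the kolla images, volumes and environment are omitted here, and 192.168.16.10 is simply the nova-api bind address on testbed-node-0 taken from the log), the nova_api entry corresponds to something like:

    docker run -d --name nova_api \
      --health-cmd 'healthcheck_curl http://192.168.16.10:8774' \
      --health-interval 30s \
      --health-retries 3 \
      --health-start-period 5s \
      --health-timeout 30s \
      registry.osism.tech/kolla/nova-api:2024.2

The nova-scheduler entries use 'healthcheck_port nova-scheduler 5672' instead, i.e. the check only asserts that the scheduler process holds a connection to the RabbitMQ port rather than probing an HTTP endpoint.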
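Once the bootstrap plays and the service-ks-register tasks above have completed, the resulting state can be spot-checked with a few standard commands. This is only a sketch under stated assumptions: the credential file path, running the probe from a host on the internal API network, and the availability of nova-manage inside the nova_api container (as is the case for kolla images) may differ in other environments.

    # Keystone service and endpoints registered above (internal and public v2.1 URLs):
    source /etc/kolla/admin-openrc.sh
    openstack endpoint list --service nova

    # cell0 and the default cell created by the nova / nova-cell bootstrap containers:
    docker exec nova_api nova-manage cell_v2 list_cells --verbose

    # The same probe the container health check performs:
    curl -sf http://192.168.16.10:8774/ && echo OK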
2025-09-27 21:59:12.448607 | orchestrator | 2025-09-27 21:59:12.448615 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-27 21:59:12.448623 | orchestrator | Saturday 27 September 2025 21:54:12 +0000 (0:00:00.572) 0:03:42.196 **** 2025-09-27 21:59:12.448630 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.448638 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.448646 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.448654 | orchestrator | 2025-09-27 21:59:12.448662 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-27 21:59:12.448670 | orchestrator | Saturday 27 September 2025 21:54:14 +0000 (0:00:01.565) 0:03:43.762 **** 2025-09-27 21:59:12.448678 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.448686 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.448694 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.448701 | orchestrator | 2025-09-27 21:59:12.448715 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-27 21:59:12.448724 | orchestrator | Saturday 27 September 2025 21:54:14 +0000 (0:00:00.334) 0:03:44.097 **** 2025-09-27 21:59:12.448732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-27 21:59:12.448774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.448804 | orchestrator | 2025-09-27 21:59:12.448817 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-27 
21:59:12.448825 | orchestrator | Saturday 27 September 2025 21:54:16 +0000 (0:00:02.320) 0:03:46.418 **** 2025-09-27 21:59:12.448833 | orchestrator | 2025-09-27 21:59:12.448841 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-27 21:59:12.448849 | orchestrator | Saturday 27 September 2025 21:54:16 +0000 (0:00:00.132) 0:03:46.550 **** 2025-09-27 21:59:12.448857 | orchestrator | 2025-09-27 21:59:12.448865 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-27 21:59:12.448873 | orchestrator | Saturday 27 September 2025 21:54:16 +0000 (0:00:00.126) 0:03:46.677 **** 2025-09-27 21:59:12.448881 | orchestrator | 2025-09-27 21:59:12.448889 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-27 21:59:12.448896 | orchestrator | Saturday 27 September 2025 21:54:17 +0000 (0:00:00.129) 0:03:46.807 **** 2025-09-27 21:59:12.448904 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.448912 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.448920 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.448928 | orchestrator | 2025-09-27 21:59:12.448936 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-27 21:59:12.448944 | orchestrator | Saturday 27 September 2025 21:54:38 +0000 (0:00:21.210) 0:04:08.017 **** 2025-09-27 21:59:12.448952 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.448960 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.448967 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.448975 | orchestrator | 2025-09-27 21:59:12.448983 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-27 21:59:12.448991 | orchestrator | 2025-09-27 21:59:12.448999 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 21:59:12.449007 | orchestrator | Saturday 27 September 2025 21:54:48 +0000 (0:00:10.550) 0:04:18.567 **** 2025-09-27 21:59:12.449015 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.449023 | orchestrator | 2025-09-27 21:59:12.449031 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 21:59:12.449039 | orchestrator | Saturday 27 September 2025 21:54:50 +0000 (0:00:01.141) 0:04:19.709 **** 2025-09-27 21:59:12.449047 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.449054 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.449078 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.449087 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.449095 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.449103 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.449110 | orchestrator | 2025-09-27 21:59:12.449118 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-27 21:59:12.449126 | orchestrator | Saturday 27 September 2025 21:54:50 +0000 (0:00:00.560) 0:04:20.269 **** 2025-09-27 21:59:12.449134 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.449142 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.449150 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
21:59:12.449158 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:59:12.449165 | orchestrator | 2025-09-27 21:59:12.449173 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-27 21:59:12.449181 | orchestrator | Saturday 27 September 2025 21:54:51 +0000 (0:00:00.997) 0:04:21.267 **** 2025-09-27 21:59:12.449189 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-27 21:59:12.449197 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-27 21:59:12.449205 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-27 21:59:12.449218 | orchestrator | 2025-09-27 21:59:12.449226 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-27 21:59:12.449238 | orchestrator | Saturday 27 September 2025 21:54:52 +0000 (0:00:00.675) 0:04:21.942 **** 2025-09-27 21:59:12.449246 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-27 21:59:12.449254 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-27 21:59:12.449262 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-27 21:59:12.449270 | orchestrator | 2025-09-27 21:59:12.449277 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-27 21:59:12.449285 | orchestrator | Saturday 27 September 2025 21:54:53 +0000 (0:00:01.244) 0:04:23.187 **** 2025-09-27 21:59:12.449293 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-27 21:59:12.449301 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.449309 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-27 21:59:12.449317 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.449325 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-27 21:59:12.449332 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.449340 | orchestrator | 2025-09-27 21:59:12.449348 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-27 21:59:12.449356 | orchestrator | Saturday 27 September 2025 21:54:54 +0000 (0:00:00.715) 0:04:23.902 **** 2025-09-27 21:59:12.449364 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:59:12.449372 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:59:12.449380 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.449388 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:59:12.449396 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:59:12.449404 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.449412 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-27 21:59:12.449420 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-27 21:59:12.449428 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.449440 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-27 21:59:12.449448 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-27 21:59:12.449456 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
2025-09-27 21:59:12.449464 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-27 21:59:12.449472 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-27 21:59:12.449480 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-27 21:59:12.449488 | orchestrator | 2025-09-27 21:59:12.449496 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-27 21:59:12.449503 | orchestrator | Saturday 27 September 2025 21:54:55 +0000 (0:00:01.067) 0:04:24.969 **** 2025-09-27 21:59:12.449511 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.449519 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.449527 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.449535 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.449542 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.449550 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.449562 | orchestrator | 2025-09-27 21:59:12.449575 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-27 21:59:12.449590 | orchestrator | Saturday 27 September 2025 21:54:56 +0000 (0:00:01.402) 0:04:26.372 **** 2025-09-27 21:59:12.449605 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.449618 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.449640 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.449653 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.449666 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.449680 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.449695 | orchestrator | 2025-09-27 21:59:12.449711 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-27 21:59:12.449726 | orchestrator | Saturday 27 September 2025 21:54:58 +0000 (0:00:01.622) 0:04:27.995 **** 2025-09-27 21:59:12.449742 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449766 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.449907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450105 | orchestrator | 2025-09-27 21:59:12.450113 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 21:59:12.450121 | orchestrator | Saturday 27 September 2025 21:55:00 +0000 (0:00:02.348) 0:04:30.344 **** 2025-09-27 21:59:12.450130 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-27 21:59:12.450140 | orchestrator | 2025-09-27 21:59:12.450148 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-27 21:59:12.450156 | orchestrator | Saturday 27 September 2025 21:55:01 +0000 (0:00:01.169) 0:04:31.513 **** 2025-09-27 21:59:12.450172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450352 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450374 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.450382 | orchestrator | 2025-09-27 21:59:12.450390 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-27 21:59:12.450398 | orchestrator | Saturday 27 September 2025 21:55:05 +0000 (0:00:03.410) 0:04:34.923 **** 2025-09-27 21:59:12.450411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.450420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.450442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.450465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.450473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450481 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.450489 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.450501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.450510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.450529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450537 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.450546 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.450554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450563 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.450571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.450583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450592 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.450600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.450620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450629 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.450637 | orchestrator | 2025-09-27 21:59:12.450645 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-27 21:59:12.450654 | orchestrator | Saturday 27 September 2025 21:55:06 +0000 (0:00:01.605) 0:04:36.529 **** 2025-09-27 21:59:12.450662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.450670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.450683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450691 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.450699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.450722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.450731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450744 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.450758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.450771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.450792 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450816 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.450829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.450851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450868 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.450884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.450900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450914 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.450928 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.450974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.450999 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.451007 | orchestrator | 2025-09-27 21:59:12.451015 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 21:59:12.451023 | orchestrator | Saturday 27 September 2025 21:55:08 +0000 (0:00:02.077) 0:04:38.606 **** 2025-09-27 21:59:12.451031 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.451039 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.451047 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.451055 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-27 21:59:12.451080 | orchestrator | 2025-09-27 21:59:12.451089 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-27 21:59:12.451097 | orchestrator | Saturday 27 September 2025 21:55:09 +0000 (0:00:00.881) 0:04:39.487 **** 2025-09-27 21:59:12.451105 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:59:12.451113 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:59:12.451121 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:59:12.451128 | orchestrator | 2025-09-27 21:59:12.451136 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-27 21:59:12.451144 | orchestrator | Saturday 27 September 2025 21:55:10 +0000 (0:00:00.803) 0:04:40.291 **** 2025-09-27 21:59:12.451152 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:59:12.451159 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-27 21:59:12.451167 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-27 21:59:12.451175 | orchestrator | 2025-09-27 21:59:12.451183 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-27 21:59:12.451191 | orchestrator | Saturday 27 September 2025 21:55:11 +0000 (0:00:00.765) 0:04:41.057 **** 2025-09-27 21:59:12.451199 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:59:12.451207 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:59:12.451215 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:59:12.451223 | orchestrator | 2025-09-27 21:59:12.451236 | orchestrator | TASK [nova-cell : Extract 
cinder key from file] ******************************** 2025-09-27 21:59:12.451244 | orchestrator | Saturday 27 September 2025 21:55:11 +0000 (0:00:00.468) 0:04:41.526 **** 2025-09-27 21:59:12.451252 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:59:12.451260 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:59:12.451268 | orchestrator | ok: [testbed-node-5] 2025-09-27 21:59:12.451276 | orchestrator | 2025-09-27 21:59:12.451284 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-27 21:59:12.451292 | orchestrator | Saturday 27 September 2025 21:55:12 +0000 (0:00:00.603) 0:04:42.129 **** 2025-09-27 21:59:12.451299 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-27 21:59:12.451307 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-27 21:59:12.451315 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-27 21:59:12.451323 | orchestrator | 2025-09-27 21:59:12.451331 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-27 21:59:12.451339 | orchestrator | Saturday 27 September 2025 21:55:13 +0000 (0:00:01.153) 0:04:43.283 **** 2025-09-27 21:59:12.451346 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-27 21:59:12.451354 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-27 21:59:12.451362 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-27 21:59:12.451370 | orchestrator | 2025-09-27 21:59:12.451378 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-27 21:59:12.451386 | orchestrator | Saturday 27 September 2025 21:55:14 +0000 (0:00:01.219) 0:04:44.503 **** 2025-09-27 21:59:12.451394 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-27 21:59:12.451408 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-27 21:59:12.451416 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-27 21:59:12.451423 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-27 21:59:12.451431 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-27 21:59:12.451439 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-27 21:59:12.451447 | orchestrator | 2025-09-27 21:59:12.451455 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-27 21:59:12.451462 | orchestrator | Saturday 27 September 2025 21:55:18 +0000 (0:00:03.761) 0:04:48.265 **** 2025-09-27 21:59:12.451470 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.451478 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.451486 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.451494 | orchestrator | 2025-09-27 21:59:12.451502 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-27 21:59:12.451509 | orchestrator | Saturday 27 September 2025 21:55:19 +0000 (0:00:00.514) 0:04:48.780 **** 2025-09-27 21:59:12.451517 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.451525 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.451533 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.451541 | orchestrator | 2025-09-27 21:59:12.451548 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-27 21:59:12.451556 | orchestrator | 
Saturday 27 September 2025 21:55:19 +0000 (0:00:00.310) 0:04:49.090 **** 2025-09-27 21:59:12.451564 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.451572 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.451580 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.451588 | orchestrator | 2025-09-27 21:59:12.451595 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-27 21:59:12.451603 | orchestrator | Saturday 27 September 2025 21:55:20 +0000 (0:00:01.184) 0:04:50.274 **** 2025-09-27 21:59:12.451611 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-27 21:59:12.451627 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-27 21:59:12.451636 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-27 21:59:12.451644 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-27 21:59:12.451652 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-27 21:59:12.451660 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-27 21:59:12.451667 | orchestrator | 2025-09-27 21:59:12.451675 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-27 21:59:12.451683 | orchestrator | Saturday 27 September 2025 21:55:23 +0000 (0:00:03.373) 0:04:53.648 **** 2025-09-27 21:59:12.451691 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:59:12.451699 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 21:59:12.451707 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:59:12.451715 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-27 21:59:12.451723 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.451730 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-27 21:59:12.451738 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.451746 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-27 21:59:12.451754 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.451761 | orchestrator | 2025-09-27 21:59:12.451769 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-27 21:59:12.451782 | orchestrator | Saturday 27 September 2025 21:55:27 +0000 (0:00:03.529) 0:04:57.178 **** 2025-09-27 21:59:12.451790 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.451798 | orchestrator | 2025-09-27 21:59:12.451809 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-27 21:59:12.451818 | orchestrator | Saturday 27 September 2025 21:55:27 +0000 (0:00:00.156) 0:04:57.335 **** 2025-09-27 21:59:12.451826 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.451833 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.451841 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.451849 | orchestrator | skipping: 
[testbed-node-0] 2025-09-27 21:59:12.451857 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.451864 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.451872 | orchestrator | 2025-09-27 21:59:12.451880 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-27 21:59:12.451888 | orchestrator | Saturday 27 September 2025 21:55:28 +0000 (0:00:00.570) 0:04:57.906 **** 2025-09-27 21:59:12.451896 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-27 21:59:12.451903 | orchestrator | 2025-09-27 21:59:12.451911 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-27 21:59:12.451919 | orchestrator | Saturday 27 September 2025 21:55:28 +0000 (0:00:00.699) 0:04:58.605 **** 2025-09-27 21:59:12.451927 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.451935 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.451942 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.451950 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.451958 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.451966 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.451973 | orchestrator | 2025-09-27 21:59:12.451981 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-27 21:59:12.451989 | orchestrator | Saturday 27 September 2025 21:55:29 +0000 (0:00:00.922) 0:04:59.528 **** 2025-09-27 21:59:12.451997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452019 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452086 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452133 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452150 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452176 | orchestrator | 2025-09-27 21:59:12.452184 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-27 21:59:12.452193 | orchestrator | Saturday 27 September 2025 21:55:33 +0000 (0:00:03.924) 0:05:03.452 **** 2025-09-27 21:59:12.452205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.452214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.452222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.452231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.452243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.452256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.452269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 
21:59:12.452277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.452357 | orchestrator | 2025-09-27 21:59:12.452365 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-27 21:59:12.452373 | orchestrator | Saturday 27 September 2025 21:55:39 +0000 (0:00:06.148) 0:05:09.601 **** 2025-09-27 21:59:12.452381 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.452389 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.452397 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.452404 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.452412 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.452420 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.452428 | orchestrator | 2025-09-27 21:59:12.452441 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-27 21:59:12.452455 | orchestrator | Saturday 27 September 2025 21:55:41 +0000 (0:00:01.320) 0:05:10.922 **** 2025-09-27 21:59:12.452474 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-27 21:59:12.452489 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-27 21:59:12.452502 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-27 21:59:12.452515 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  
2025-09-27 21:59:12.452528 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.452543 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-27 21:59:12.452558 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-27 21:59:12.452574 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-27 21:59:12.452595 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-27 21:59:12.452610 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.452624 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-27 21:59:12.452638 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.452651 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-27 21:59:12.452665 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-27 21:59:12.452679 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-27 21:59:12.452692 | orchestrator | 2025-09-27 21:59:12.452700 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-27 21:59:12.452708 | orchestrator | Saturday 27 September 2025 21:55:44 +0000 (0:00:03.492) 0:05:14.414 **** 2025-09-27 21:59:12.452716 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.452724 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.452731 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.452739 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.452747 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.452755 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.452763 | orchestrator | 2025-09-27 21:59:12.452770 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-27 21:59:12.452778 | orchestrator | Saturday 27 September 2025 21:55:45 +0000 (0:00:00.592) 0:05:15.007 **** 2025-09-27 21:59:12.452786 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-27 21:59:12.452794 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-27 21:59:12.452802 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-27 21:59:12.452817 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-27 21:59:12.452825 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-27 21:59:12.452833 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-27 21:59:12.452841 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-27 21:59:12.452849 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-27 21:59:12.452856 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-27 21:59:12.452864 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-27 21:59:12.452879 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.452887 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-27 21:59:12.452895 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.452903 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-27 21:59:12.452911 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.452918 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-27 21:59:12.452926 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-27 21:59:12.452934 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-27 21:59:12.452942 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-27 21:59:12.452949 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-27 21:59:12.452957 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-27 21:59:12.452965 | orchestrator | 2025-09-27 21:59:12.452973 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-27 21:59:12.452981 | orchestrator | Saturday 27 September 2025 21:55:50 +0000 (0:00:05.169) 0:05:20.176 **** 2025-09-27 21:59:12.452989 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 21:59:12.452997 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 21:59:12.453005 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-27 21:59:12.453013 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 21:59:12.453021 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 21:59:12.453035 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-27 21:59:12.453043 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-27 21:59:12.453051 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-27 21:59:12.453059 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-27 21:59:12.453121 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 21:59:12.453130 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 21:59:12.453137 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-27 21:59:12.453145 | orchestrator | skipping: [testbed-node-2] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-27 21:59:12.453153 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.453161 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-27 21:59:12.453169 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.453177 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-27 21:59:12.453185 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.453193 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 21:59:12.453200 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 21:59:12.453208 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-27 21:59:12.453223 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 21:59:12.453231 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 21:59:12.453238 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-27 21:59:12.453252 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 21:59:12.453260 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 21:59:12.453268 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-27 21:59:12.453276 | orchestrator | 2025-09-27 21:59:12.453284 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-27 21:59:12.453291 | orchestrator | Saturday 27 September 2025 21:55:57 +0000 (0:00:06.961) 0:05:27.137 **** 2025-09-27 21:59:12.453299 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.453307 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.453315 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.453323 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.453331 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.453338 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.453346 | orchestrator | 2025-09-27 21:59:12.453354 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-27 21:59:12.453362 | orchestrator | Saturday 27 September 2025 21:55:58 +0000 (0:00:00.759) 0:05:27.897 **** 2025-09-27 21:59:12.453370 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.453378 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.453385 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.453393 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.453401 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.453409 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.453417 | orchestrator | 2025-09-27 21:59:12.453425 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-27 21:59:12.453433 | orchestrator | Saturday 27 September 2025 21:55:58 +0000 (0:00:00.568) 0:05:28.466 **** 2025-09-27 21:59:12.453440 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.453448 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.453457 | orchestrator | skipping: [testbed-node-2] 2025-09-27 
21:59:12.453471 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.453485 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.453498 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.453512 | orchestrator | 2025-09-27 21:59:12.453526 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-27 21:59:12.453540 | orchestrator | Saturday 27 September 2025 21:56:00 +0000 (0:00:02.227) 0:05:30.693 **** 2025-09-27 21:59:12.453556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.453578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.453595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.453611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host2025-09-27 21:59:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:12.453620 | orchestrator | ', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.453628 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.453636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.453645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.453653 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.453665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-27 21:59:12.453678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-27 21:59:12.453690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.453697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.453704 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.453765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.453773 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.453781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.453791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.453803 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.453810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-27 21:59:12.453817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-27 21:59:12.453824 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.453831 | orchestrator | 2025-09-27 21:59:12.453838 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-27 21:59:12.453844 | orchestrator | Saturday 27 September 2025 21:56:02 +0000 (0:00:01.359) 0:05:32.053 **** 2025-09-27 21:59:12.453851 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-27 21:59:12.453858 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-27 21:59:12.453865 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.453871 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-27 21:59:12.453878 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-27 21:59:12.453884 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.453891 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-27 21:59:12.453898 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-27 21:59:12.453904 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.453911 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-27 21:59:12.453917 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-27 21:59:12.453924 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.453931 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-27 21:59:12.453937 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-27 21:59:12.453944 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.453952 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-27 21:59:12.453969 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-27 21:59:12.453981 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.453992 | orchestrator | 2025-09-27 21:59:12.454005 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-27 21:59:12.454042 | orchestrator | Saturday 27 September 2025 21:56:03 +0000 (0:00:00.840) 0:05:32.894 **** 2025-09-27 21:59:12.454059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 
'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-27 21:59:12.454268 | orchestrator | 2025-09-27 21:59:12.454274 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-27 21:59:12.454281 | orchestrator | Saturday 27 September 2025 21:56:06 +0000 (0:00:02.812) 0:05:35.707 **** 2025-09-27 21:59:12.454288 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.454295 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.454301 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.454308 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.454314 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.454321 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.454328 | orchestrator | 2025-09-27 21:59:12.454334 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 21:59:12.454341 | orchestrator | Saturday 27 September 2025 21:56:06 +0000 (0:00:00.727) 0:05:36.434 **** 2025-09-27 21:59:12.454347 | orchestrator | 2025-09-27 21:59:12.454354 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 21:59:12.454361 | orchestrator | Saturday 27 September 2025 21:56:06 +0000 (0:00:00.132) 0:05:36.566 **** 2025-09-27 21:59:12.454367 | orchestrator | 2025-09-27 21:59:12.454374 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 21:59:12.454380 | orchestrator | Saturday 27 September 2025 21:56:06 +0000 (0:00:00.132) 0:05:36.699 **** 2025-09-27 21:59:12.454387 | orchestrator | 2025-09-27 21:59:12.454393 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 21:59:12.454400 | orchestrator | Saturday 27 September 2025 21:56:07 +0000 (0:00:00.130) 0:05:36.830 **** 2025-09-27 21:59:12.454407 | orchestrator | 2025-09-27 21:59:12.454413 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 21:59:12.454424 | orchestrator | Saturday 27 September 2025 21:56:07 +0000 (0:00:00.127) 0:05:36.958 **** 2025-09-27 21:59:12.454431 | orchestrator | 2025-09-27 21:59:12.454438 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-27 21:59:12.454444 | orchestrator | Saturday 27 September 2025 21:56:07 +0000 (0:00:00.125) 0:05:37.083 **** 2025-09-27 21:59:12.454451 | orchestrator | 2025-09-27 21:59:12.454457 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-27 21:59:12.454464 | orchestrator | Saturday 27 September 2025 21:56:07 +0000 (0:00:00.287) 0:05:37.371 **** 2025-09-27 21:59:12.454471 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.454477 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.454484 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.454490 | orchestrator | 2025-09-27 21:59:12.454497 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-27 21:59:12.454504 | orchestrator | Saturday 27 September 2025 21:56:19 +0000 (0:00:11.813) 0:05:49.185 **** 2025-09-27 21:59:12.454514 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.454521 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.454527 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.454534 | orchestrator | 2025-09-27 21:59:12.454540 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-27 21:59:12.454547 | 
orchestrator | Saturday 27 September 2025 21:56:38 +0000 (0:00:18.650) 0:06:07.835 **** 2025-09-27 21:59:12.454554 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.454560 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.454567 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.454573 | orchestrator | 2025-09-27 21:59:12.454580 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-27 21:59:12.454587 | orchestrator | Saturday 27 September 2025 21:56:56 +0000 (0:00:18.083) 0:06:25.919 **** 2025-09-27 21:59:12.454593 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.454600 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.454606 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.454613 | orchestrator | 2025-09-27 21:59:12.454619 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-27 21:59:12.454626 | orchestrator | Saturday 27 September 2025 21:57:33 +0000 (0:00:37.669) 0:07:03.589 **** 2025-09-27 21:59:12.454633 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.454639 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.454646 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.454653 | orchestrator | 2025-09-27 21:59:12.454659 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-27 21:59:12.454666 | orchestrator | Saturday 27 September 2025 21:57:34 +0000 (0:00:00.943) 0:07:04.532 **** 2025-09-27 21:59:12.454673 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.454679 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.454686 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.454692 | orchestrator | 2025-09-27 21:59:12.454699 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-27 21:59:12.454706 | orchestrator | Saturday 27 September 2025 21:57:35 +0000 (0:00:00.861) 0:07:05.393 **** 2025-09-27 21:59:12.454712 | orchestrator | changed: [testbed-node-5] 2025-09-27 21:59:12.454719 | orchestrator | changed: [testbed-node-3] 2025-09-27 21:59:12.454725 | orchestrator | changed: [testbed-node-4] 2025-09-27 21:59:12.454732 | orchestrator | 2025-09-27 21:59:12.454745 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-27 21:59:12.454754 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:19.833) 0:07:25.226 **** 2025-09-27 21:59:12.454764 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.454775 | orchestrator | 2025-09-27 21:59:12.454787 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-27 21:59:12.454798 | orchestrator | Saturday 27 September 2025 21:57:55 +0000 (0:00:00.131) 0:07:25.358 **** 2025-09-27 21:59:12.454816 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.454823 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.454830 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.454836 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.454843 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.454850 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-09-27 21:59:12.454857 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:59:12.454863 | orchestrator | 2025-09-27 21:59:12.454870 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-27 21:59:12.454876 | orchestrator | Saturday 27 September 2025 21:58:19 +0000 (0:00:23.669) 0:07:49.028 **** 2025-09-27 21:59:12.454883 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.454889 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.454896 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.454902 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.454909 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.454915 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.454922 | orchestrator | 2025-09-27 21:59:12.454928 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-27 21:59:12.454935 | orchestrator | Saturday 27 September 2025 21:58:28 +0000 (0:00:09.379) 0:07:58.408 **** 2025-09-27 21:59:12.454942 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.454948 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.454955 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.454961 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.454968 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.454974 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-27 21:59:12.454981 | orchestrator | 2025-09-27 21:59:12.454987 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-27 21:59:12.454994 | orchestrator | Saturday 27 September 2025 21:58:32 +0000 (0:00:04.263) 0:08:02.671 **** 2025-09-27 21:59:12.455000 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:59:12.455007 | orchestrator | 2025-09-27 21:59:12.455014 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-27 21:59:12.455020 | orchestrator | Saturday 27 September 2025 21:58:47 +0000 (0:00:14.699) 0:08:17.371 **** 2025-09-27 21:59:12.455027 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:59:12.455033 | orchestrator | 2025-09-27 21:59:12.455040 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-27 21:59:12.455047 | orchestrator | Saturday 27 September 2025 21:58:49 +0000 (0:00:01.335) 0:08:18.706 **** 2025-09-27 21:59:12.455053 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.455060 | orchestrator | 2025-09-27 21:59:12.455082 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-27 21:59:12.455089 | orchestrator | Saturday 27 September 2025 21:58:50 +0000 (0:00:01.399) 0:08:20.105 **** 2025-09-27 21:59:12.455096 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-27 21:59:12.455102 | orchestrator | 2025-09-27 21:59:12.455109 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-27 21:59:12.455124 | orchestrator | Saturday 27 September 2025 21:59:03 +0000 (0:00:13.019) 0:08:33.125 **** 2025-09-27 21:59:12.455136 | orchestrator | ok: [testbed-node-3] 2025-09-27 21:59:12.455148 | orchestrator | ok: [testbed-node-4] 2025-09-27 21:59:12.455160 | orchestrator | ok: 
[testbed-node-5] 2025-09-27 21:59:12.455172 | orchestrator | ok: [testbed-node-0] 2025-09-27 21:59:12.455185 | orchestrator | ok: [testbed-node-1] 2025-09-27 21:59:12.455198 | orchestrator | ok: [testbed-node-2] 2025-09-27 21:59:12.455211 | orchestrator | 2025-09-27 21:59:12.455223 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-27 21:59:12.455244 | orchestrator | 2025-09-27 21:59:12.455256 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-27 21:59:12.455268 | orchestrator | Saturday 27 September 2025 21:59:05 +0000 (0:00:02.254) 0:08:35.379 **** 2025-09-27 21:59:12.455280 | orchestrator | changed: [testbed-node-0] 2025-09-27 21:59:12.455292 | orchestrator | changed: [testbed-node-1] 2025-09-27 21:59:12.455300 | orchestrator | changed: [testbed-node-2] 2025-09-27 21:59:12.455306 | orchestrator | 2025-09-27 21:59:12.455313 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-27 21:59:12.455320 | orchestrator | 2025-09-27 21:59:12.455327 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-27 21:59:12.455333 | orchestrator | Saturday 27 September 2025 21:59:06 +0000 (0:00:01.286) 0:08:36.666 **** 2025-09-27 21:59:12.455340 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.455346 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.455353 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.455359 | orchestrator | 2025-09-27 21:59:12.455366 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-27 21:59:12.455372 | orchestrator | 2025-09-27 21:59:12.455379 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-27 21:59:12.455386 | orchestrator | Saturday 27 September 2025 21:59:07 +0000 (0:00:00.523) 0:08:37.190 **** 2025-09-27 21:59:12.455392 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-27 21:59:12.455399 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-27 21:59:12.455405 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-27 21:59:12.455412 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-27 21:59:12.455419 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-27 21:59:12.455429 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-27 21:59:12.455436 | orchestrator | skipping: [testbed-node-3] 2025-09-27 21:59:12.455443 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-27 21:59:12.455449 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-27 21:59:12.455456 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-27 21:59:12.455462 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-27 21:59:12.455469 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-27 21:59:12.455475 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-27 21:59:12.455482 | orchestrator | skipping: [testbed-node-4] 2025-09-27 21:59:12.455489 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-27 21:59:12.455495 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-27 21:59:12.455502 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-27 21:59:12.455508 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-27 21:59:12.455515 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-27 21:59:12.455521 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-27 21:59:12.455528 | orchestrator | skipping: [testbed-node-5] 2025-09-27 21:59:12.455534 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-27 21:59:12.455541 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-27 21:59:12.455547 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-27 21:59:12.455554 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-27 21:59:12.455560 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-27 21:59:12.455567 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-27 21:59:12.455573 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.455580 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-27 21:59:12.455587 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-27 21:59:12.455599 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-27 21:59:12.455605 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-27 21:59:12.455612 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-27 21:59:12.455619 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-27 21:59:12.455625 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.455632 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-27 21:59:12.455638 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-27 21:59:12.455645 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-27 21:59:12.455651 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-27 21:59:12.455658 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-27 21:59:12.455665 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-27 21:59:12.455671 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.455678 | orchestrator | 2025-09-27 21:59:12.455684 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-27 21:59:12.455691 | orchestrator | 2025-09-27 21:59:12.455698 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-27 21:59:12.455704 | orchestrator | Saturday 27 September 2025 21:59:08 +0000 (0:00:01.371) 0:08:38.561 **** 2025-09-27 21:59:12.455711 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-27 21:59:12.455722 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-27 21:59:12.455729 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.455736 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-27 21:59:12.455742 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-27 21:59:12.455749 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.455755 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-27 21:59:12.455762 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-09-27 21:59:12.455768 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.455775 | orchestrator | 2025-09-27 21:59:12.455781 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-27 21:59:12.455788 | orchestrator | 2025-09-27 21:59:12.455795 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-09-27 21:59:12.455801 | orchestrator | Saturday 27 September 2025 21:59:09 +0000 (0:00:00.823) 0:08:39.385 **** 2025-09-27 21:59:12.455808 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.455814 | orchestrator | 2025-09-27 21:59:12.455821 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-27 21:59:12.455827 | orchestrator | 2025-09-27 21:59:12.455834 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-27 21:59:12.455841 | orchestrator | Saturday 27 September 2025 21:59:10 +0000 (0:00:00.688) 0:08:40.073 **** 2025-09-27 21:59:12.455847 | orchestrator | skipping: [testbed-node-0] 2025-09-27 21:59:12.455854 | orchestrator | skipping: [testbed-node-1] 2025-09-27 21:59:12.455860 | orchestrator | skipping: [testbed-node-2] 2025-09-27 21:59:12.455867 | orchestrator | 2025-09-27 21:59:12.455873 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 21:59:12.455880 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-27 21:59:12.455887 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-27 21:59:12.455895 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-27 21:59:12.455902 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-27 21:59:12.455914 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-27 21:59:12.455921 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-27 21:59:12.455927 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-27 21:59:12.455934 | orchestrator | 2025-09-27 21:59:12.455941 | orchestrator | 2025-09-27 21:59:12.455947 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 21:59:12.455954 | orchestrator | Saturday 27 September 2025 21:59:10 +0000 (0:00:00.482) 0:08:40.555 **** 2025-09-27 21:59:12.455960 | orchestrator | =============================================================================== 2025-09-27 21:59:12.455967 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.67s 2025-09-27 21:59:12.455974 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.02s 2025-09-27 21:59:12.455980 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.67s 2025-09-27 21:59:12.455987 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.27s 2025-09-27 21:59:12.455993 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.21s 2025-09-27 21:59:12.456000 | orchestrator | nova : 
Running Nova API bootstrap container ---------------------------- 19.91s 2025-09-27 21:59:12.456006 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 19.83s 2025-09-27 21:59:12.456013 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.65s 2025-09-27 21:59:12.456019 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 18.08s 2025-09-27 21:59:12.456026 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.04s 2025-09-27 21:59:12.456032 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.70s 2025-09-27 21:59:12.456039 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.97s 2025-09-27 21:59:12.456045 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.85s 2025-09-27 21:59:12.456052 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.40s 2025-09-27 21:59:12.456058 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.02s 2025-09-27 21:59:12.456112 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.81s 2025-09-27 21:59:12.456120 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.55s 2025-09-27 21:59:12.456127 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.38s 2025-09-27 21:59:12.456133 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.54s 2025-09-27 21:59:12.456140 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.21s 2025-09-27 21:59:15.489719 | orchestrator | 2025-09-27 21:59:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:18.537802 | orchestrator | 2025-09-27 21:59:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:21.573439 | orchestrator | 2025-09-27 21:59:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:24.617267 | orchestrator | 2025-09-27 21:59:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:27.660164 | orchestrator | 2025-09-27 21:59:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:30.701469 | orchestrator | 2025-09-27 21:59:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:33.745606 | orchestrator | 2025-09-27 21:59:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:36.789027 | orchestrator | 2025-09-27 21:59:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:39.828769 | orchestrator | 2025-09-27 21:59:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:42.864503 | orchestrator | 2025-09-27 21:59:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:45.894059 | orchestrator | 2025-09-27 21:59:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:48.928574 | orchestrator | 2025-09-27 21:59:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:51.968784 | orchestrator | 2025-09-27 21:59:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:55.012384 | orchestrator | 2025-09-27 21:59:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 21:59:58.044740 | 
orchestrator | 2025-09-27 21:59:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:00:01.078730 | orchestrator | 2025-09-27 22:00:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:00:04.128016 | orchestrator | 2025-09-27 22:00:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:00:07.172717 | orchestrator | 2025-09-27 22:00:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:00:10.215841 | orchestrator | 2025-09-27 22:00:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-27 22:00:13.258503 | orchestrator | 2025-09-27 22:00:13.613259 | orchestrator | 2025-09-27 22:00:13.618332 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Sep 27 22:00:13 UTC 2025 2025-09-27 22:00:13.618435 | orchestrator | 2025-09-27 22:00:13.906200 | orchestrator | ok: Runtime: 0:35:13.159262 2025-09-27 22:00:14.166770 | 2025-09-27 22:00:14.166943 | TASK [Bootstrap services] 2025-09-27 22:00:14.916695 | orchestrator | 2025-09-27 22:00:14.916948 | orchestrator | # BOOTSTRAP 2025-09-27 22:00:14.916986 | orchestrator | 2025-09-27 22:00:14.917009 | orchestrator | + set -e 2025-09-27 22:00:14.917032 | orchestrator | + echo 2025-09-27 22:00:14.917055 | orchestrator | + echo '# BOOTSTRAP' 2025-09-27 22:00:14.917083 | orchestrator | + echo 2025-09-27 22:00:14.917218 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-27 22:00:14.928619 | orchestrator | + set -e 2025-09-27 22:00:14.928702 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-27 22:00:20.266552 | orchestrator | 2025-09-27 22:00:20 | INFO  | It takes a moment until task 5e514a39-2504-4bcd-bd69-344ac9849346 (flavor-manager) has been started and output is visible here. 
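The shell trace above shows the bootstrap entry point: /opt/configuration/scripts/bootstrap-services.sh runs under set -e and hands off to /opt/configuration/scripts/bootstrap/300-openstack.sh. As a rough, hypothetical sketch of that dispatch pattern (the real script is not printed in this log and may differ):

    #!/usr/bin/env sh
    # Hypothetical reconstruction of a bootstrap dispatcher -- illustrative only,
    # not the actual contents of bootstrap-services.sh.
    set -e
    for script in /opt/configuration/scripts/bootstrap/*.sh; do
        echo "# running ${script}"
        sh -c "${script}"
    done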
2025-09-27 22:00:29.376461 | orchestrator | 2025-09-27 22:00:23 | INFO  | Flavor SCS-1L-1 created 2025-09-27 22:00:29.376629 | orchestrator | 2025-09-27 22:00:24 | INFO  | Flavor SCS-1L-1-5 created 2025-09-27 22:00:29.376662 | orchestrator | 2025-09-27 22:00:24 | INFO  | Flavor SCS-1V-2 created 2025-09-27 22:00:29.376683 | orchestrator | 2025-09-27 22:00:24 | INFO  | Flavor SCS-1V-2-5 created 2025-09-27 22:00:29.376700 | orchestrator | 2025-09-27 22:00:25 | INFO  | Flavor SCS-1V-4 created 2025-09-27 22:00:29.376717 | orchestrator | 2025-09-27 22:00:25 | INFO  | Flavor SCS-1V-4-10 created 2025-09-27 22:00:29.376735 | orchestrator | 2025-09-27 22:00:25 | INFO  | Flavor SCS-1V-8 created 2025-09-27 22:00:29.376754 | orchestrator | 2025-09-27 22:00:25 | INFO  | Flavor SCS-1V-8-20 created 2025-09-27 22:00:29.376787 | orchestrator | 2025-09-27 22:00:25 | INFO  | Flavor SCS-2V-4 created 2025-09-27 22:00:29.376808 | orchestrator | 2025-09-27 22:00:25 | INFO  | Flavor SCS-2V-4-10 created 2025-09-27 22:00:29.376826 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-2V-8 created 2025-09-27 22:00:29.376844 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-2V-8-20 created 2025-09-27 22:00:29.376862 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-2V-16 created 2025-09-27 22:00:29.376880 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-2V-16-50 created 2025-09-27 22:00:29.376898 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-4V-8 created 2025-09-27 22:00:29.376916 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-4V-8-20 created 2025-09-27 22:00:29.376935 | orchestrator | 2025-09-27 22:00:26 | INFO  | Flavor SCS-4V-16 created 2025-09-27 22:00:29.376953 | orchestrator | 2025-09-27 22:00:27 | INFO  | Flavor SCS-4V-16-50 created 2025-09-27 22:00:29.376971 | orchestrator | 2025-09-27 22:00:27 | INFO  | Flavor SCS-4V-32 created 2025-09-27 22:00:29.376990 | orchestrator | 2025-09-27 22:00:27 | INFO  | Flavor SCS-4V-32-100 created 2025-09-27 22:00:29.377008 | orchestrator | 2025-09-27 22:00:27 | INFO  | Flavor SCS-8V-16 created 2025-09-27 22:00:29.377026 | orchestrator | 2025-09-27 22:00:27 | INFO  | Flavor SCS-8V-16-50 created 2025-09-27 22:00:29.377046 | orchestrator | 2025-09-27 22:00:28 | INFO  | Flavor SCS-8V-32 created 2025-09-27 22:00:29.377064 | orchestrator | 2025-09-27 22:00:28 | INFO  | Flavor SCS-8V-32-100 created 2025-09-27 22:00:29.377082 | orchestrator | 2025-09-27 22:00:28 | INFO  | Flavor SCS-16V-32 created 2025-09-27 22:00:29.377101 | orchestrator | 2025-09-27 22:00:28 | INFO  | Flavor SCS-16V-32-100 created 2025-09-27 22:00:29.377120 | orchestrator | 2025-09-27 22:00:28 | INFO  | Flavor SCS-2V-4-20s created 2025-09-27 22:00:29.377138 | orchestrator | 2025-09-27 22:00:28 | INFO  | Flavor SCS-4V-8-50s created 2025-09-27 22:00:29.377190 | orchestrator | 2025-09-27 22:00:29 | INFO  | Flavor SCS-8V-32-100s created 2025-09-27 22:00:32.038285 | orchestrator | 2025-09-27 22:00:32 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-27 22:00:42.193601 | orchestrator | 2025-09-27 22:00:42 | INFO  | Task 72e3cd24-01b9-46c5-8476-4242514cc1dc (bootstrap-basic) was prepared for execution. 2025-09-27 22:00:42.193721 | orchestrator | 2025-09-27 22:00:42 | INFO  | It takes a moment until task 72e3cd24-01b9-46c5-8476-4242514cc1dc (bootstrap-basic) has been started and output is visible here. 
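The flavors created above follow the SCS flavor naming scheme, in which SCS-2V-4-10, for example, denotes 2 vCPUs, 4 GiB of RAM and a 10 GB root disk (with "L" marking a low-performance vCPU class and a trailing "s" an SSD-backed local disk in that scheme). Assuming that convention, a single flavor from the list could be created by hand with the OpenStack CLI roughly as below; the flavor-manager task does the same for the whole set, possibly with extra specs that are not visible in this log:

    # Illustrative sketch only -- the vCPU/RAM/disk values are derived from the
    # SCS naming convention, not read from this log.
    openstack flavor create --public --vcpus 2 --ram 4096 --disk 10 SCS-2V-4-10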
2025-09-27 22:06:25.837484 | orchestrator | 2025-09-27 22:06:25 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-27 22:06:25.842395 | orchestrator | 2025-09-27 22:06:25 | INFO  | Task df86e1f8-e436-4efa-9ed1-9db037f7552d (bootstrap-basic) was prepared for execution. 2025-09-27 22:06:25.842474 | orchestrator | 2025-09-27 22:06:25 | INFO  | It takes a moment until task df86e1f8-e436-4efa-9ed1-9db037f7552d (bootstrap-basic) has been started and output is visible here. 2025-09-27 22:11:49.020401 | orchestrator | 2025-09-27 22:11:49.020506 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-27 22:11:49.020525 | orchestrator | 2025-09-27 22:11:49.020539 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-27 22:11:49.020551 | orchestrator | Saturday 27 September 2025 22:00:46 +0000 (0:00:00.077) 0:00:00.078 **** 2025-09-27 22:11:49.020561 | orchestrator | ok: [localhost] 2025-09-27 22:11:49.020573 | orchestrator | 2025-09-27 22:11:49.020585 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-27 22:11:49.020597 | orchestrator | Saturday 27 September 2025 22:00:48 +0000 (0:00:01.950) 0:00:02.028 **** 2025-09-27 22:11:49.020609 | orchestrator | ok: [localhost] 2025-09-27 22:11:49.020620 | orchestrator | 2025-09-27 22:11:49.020632 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-27 22:11:49.020643 | orchestrator | Saturday 27 September 2025 22:00:57 +0000 (0:00:08.940) 0:00:10.968 **** 2025-09-27 22:11:49.020655 | orchestrator | changed: [localhost] 2025-09-27 22:11:49.020667 | orchestrator | 2025-09-27 22:11:49.020678 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-27 22:11:49.020689 | orchestrator | Saturday 27 September 2025 22:01:05 +0000 (0:00:08.415) 0:00:19.384 **** 2025-09-27 22:11:49.020700 | orchestrator | ok: [localhost] 2025-09-27 22:11:49.020715 | orchestrator | 2025-09-27 22:11:49.020726 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-27 22:11:49.020737 | orchestrator | Saturday 27 September 2025 22:01:13 +0000 (0:00:08.099) 0:00:27.484 **** 2025-09-27 22:11:49.020748 | orchestrator | changed: [localhost] 2025-09-27 22:11:49.020759 | orchestrator | 2025-09-27 22:11:49.020770 | orchestrator | TASK [Create public network] *************************************************** 2025-09-27 22:11:49.020781 | orchestrator | Saturday 27 September 2025 22:01:22 +0000 (0:00:08.182) 0:00:35.666 **** 2025-09-27 22:11:49.020792 | orchestrator | 2025-09-27 22:11:49.020822 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.020833 | orchestrator | 2025-09-27 22:11:49.020844 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.020855 | orchestrator | 2025-09-27 22:11:49.020867 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.020878 | orchestrator | 2025-09-27 22:11:49.020889 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.020899 | orchestrator | 2025-09-27 22:11:49.020910 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 
22:11:49.020921 | orchestrator | 2025-09-27 22:11:49.020932 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.020950 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "extra_data": {"data": null, "details": "504 Gateway Time-out: The server didn't respond in time.", "response": "
504 Gateway Time-out
\nThe server didn't respond in time.\n\n"}, "msg": "HttpException: 504: Server Error for url: https://api.testbed.osism.xyz:9696/v2.0/networks/public, 504 Gateway Time-out: The server didn't respond in time."} 2025-09-27 22:11:49.020993 | orchestrator | 2025-09-27 22:11:49.021005 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:11:49.021017 | orchestrator | localhost : ok=5  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-27 22:11:49.021028 | orchestrator | 2025-09-27 22:11:49.021039 | orchestrator | 2025-09-27 22:11:49.021129 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:11:49.021142 | orchestrator | Saturday 27 September 2025 22:06:25 +0000 (0:05:03.597) 0:05:39.264 **** 2025-09-27 22:11:49.021154 | orchestrator | =============================================================================== 2025-09-27 22:11:49.021165 | orchestrator | Create public network ------------------------------------------------- 303.60s 2025-09-27 22:11:49.021177 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.94s 2025-09-27 22:11:49.021189 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.42s 2025-09-27 22:11:49.021201 | orchestrator | Create volume type local ------------------------------------------------ 8.18s 2025-09-27 22:11:49.021213 | orchestrator | Get volume type local --------------------------------------------------- 8.10s 2025-09-27 22:11:49.021224 | orchestrator | Gathering Facts --------------------------------------------------------- 1.95s 2025-09-27 22:11:49.021236 | orchestrator | 2025-09-27 22:11:49.021247 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-27 22:11:49.021258 | orchestrator | 2025-09-27 22:11:49.021270 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-27 22:11:49.021281 | orchestrator | Saturday 27 September 2025 22:06:29 +0000 (0:00:00.072) 0:00:00.072 **** 2025-09-27 22:11:49.021293 | orchestrator | ok: [localhost] 2025-09-27 22:11:49.021304 | orchestrator | 2025-09-27 22:11:49.021315 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-27 22:11:49.021326 | orchestrator | Saturday 27 September 2025 22:06:37 +0000 (0:00:07.753) 0:00:07.826 **** 2025-09-27 22:11:49.021338 | orchestrator | skipping: [localhost] 2025-09-27 22:11:49.021350 | orchestrator | 2025-09-27 22:11:49.021361 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-27 22:11:49.021373 | orchestrator | Saturday 27 September 2025 22:06:37 +0000 (0:00:00.063) 0:00:07.889 **** 2025-09-27 22:11:49.021384 | orchestrator | ok: [localhost] 2025-09-27 22:11:49.021395 | orchestrator | 2025-09-27 22:11:49.021407 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-27 22:11:49.021418 | orchestrator | Saturday 27 September 2025 22:06:44 +0000 (0:00:07.118) 0:00:15.008 **** 2025-09-27 22:11:49.021431 | orchestrator | skipping: [localhost] 2025-09-27 22:11:49.021442 | orchestrator | 2025-09-27 22:11:49.021454 | orchestrator | TASK [Create public network] *************************************************** 2025-09-27 22:11:49.021485 | orchestrator | Saturday 27 September 2025 22:06:44 +0000 (0:00:00.047) 0:00:15.055 **** 
2025-09-27 22:11:49.021499 | orchestrator | 2025-09-27 22:11:49.021510 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.021524 | orchestrator | 2025-09-27 22:11:49.021536 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.021548 | orchestrator | 2025-09-27 22:11:49.021559 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.021571 | orchestrator | 2025-09-27 22:11:49.021583 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.021596 | orchestrator | 2025-09-27 22:11:49.021607 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.021619 | orchestrator | 2025-09-27 22:11:49.021630 | orchestrator | STILL ALIVE [task 'Create public network' is running] ************************** 2025-09-27 22:11:49.021650 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "extra_data": {"data": null, "details": "504 Gateway Time-out: The server didn't respond in time.", "response": "
504 Gateway Time-out
\nThe server didn't respond in time.\n\n"}, "msg": "HttpException: 504: Server Error for url: https://api.testbed.osism.xyz:9696/v2.0/networks/public, 504 Gateway Time-out: The server didn't respond in time."} 2025-09-27 22:11:49.021672 | orchestrator | 2025-09-27 22:11:49.021683 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-27 22:11:49.021695 | orchestrator | localhost : ok=2  changed=0 unreachable=0 failed=1  skipped=2  rescued=0 ignored=0 2025-09-27 22:11:49.021707 | orchestrator | 2025-09-27 22:11:49.021718 | orchestrator | 2025-09-27 22:11:49.021730 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-27 22:11:49.021742 | orchestrator | Saturday 27 September 2025 22:11:48 +0000 (0:05:03.821) 0:05:18.877 **** 2025-09-27 22:11:49.021753 | orchestrator | =============================================================================== 2025-09-27 22:11:49.021765 | orchestrator | Create public network ------------------------------------------------- 303.82s 2025-09-27 22:11:49.021777 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.75s 2025-09-27 22:11:49.021789 | orchestrator | Get volume type local --------------------------------------------------- 7.12s 2025-09-27 22:11:49.021800 | orchestrator | Create volume type LUKS ------------------------------------------------- 0.06s 2025-09-27 22:11:49.021812 | orchestrator | Create volume type local ------------------------------------------------ 0.05s 2025-09-27 22:11:49.700268 | orchestrator | ERROR 2025-09-27 22:11:49.700759 | orchestrator | { 2025-09-27 22:11:49.700898 | orchestrator | "delta": "0:11:34.985757", 2025-09-27 22:11:49.700968 | orchestrator | "end": "2025-09-27 22:11:49.468231", 2025-09-27 22:11:49.701050 | orchestrator | "msg": "non-zero return code", 2025-09-27 22:11:49.701106 | orchestrator | "rc": 2, 2025-09-27 22:11:49.701158 | orchestrator | "start": "2025-09-27 22:00:14.482474" 2025-09-27 22:11:49.701208 | orchestrator | } failure 2025-09-27 22:11:49.718934 | 2025-09-27 22:11:49.719069 | PLAY RECAP 2025-09-27 22:11:49.719159 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0 2025-09-27 22:11:49.719276 | 2025-09-27 22:11:49.959884 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-09-27 22:11:49.962740 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-27 22:11:51.161730 | 2025-09-27 22:11:51.162000 | PLAY [Post output play] 2025-09-27 22:11:51.198875 | 2025-09-27 22:11:51.199052 | LOOP [stage-output : Register sources] 2025-09-27 22:11:51.273461 | 2025-09-27 22:11:51.273765 | TASK [stage-output : Check sudo] 2025-09-27 22:11:52.212163 | orchestrator | sudo: a password is required 2025-09-27 22:11:52.314941 | orchestrator | ok: Runtime: 0:00:00.016542 2025-09-27 22:11:52.329905 | 2025-09-27 22:11:52.330062 | LOOP [stage-output : Set source and destination for files and folders] 2025-09-27 22:11:52.370614 | 2025-09-27 22:11:52.370941 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-09-27 22:11:52.439763 | orchestrator | ok 2025-09-27 22:11:52.448393 | 2025-09-27 22:11:52.448515 | LOOP [stage-output : Ensure target folders exist] 2025-09-27 22:11:52.901511 | orchestrator | ok: "docs" 2025-09-27 22:11:52.901878 | 2025-09-27 22:11:53.175606 | orchestrator | ok: "artifacts" 2025-09-27 22:11:53.436529 | orchestrator | ok: "logs" 
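Both bootstrap-basic attempts above fail in the same place: the "Create public network" task blocks for about five minutes until the Neutron API at https://api.testbed.osism.xyz:9696 answers with a 504 Gateway Time-out, so the play aborts and the BOOTSTRAP step exits with rc=2. A quick way to reproduce the failing call outside of Ansible, assuming a clouds.yaml entry for this deployment is available on the manager (the cloud name used below is an assumption, not taken from this log), would be:

    # Hedged diagnostic sketch -- "admin" as the cloud name is hypothetical.
    export OS_CLOUD=admin
    # Replay the lookup that timed out, with full request/response logging:
    openstack --debug network show public
    # Or call the Neutron endpoint from the error message directly with a token:
    TOKEN=$(openstack token issue -f value -c id)
    curl -sS -H "X-Auth-Token: ${TOKEN}" \
      https://api.testbed.osism.xyz:9696/v2.0/networks/public

A 504 from the API endpoint usually indicates that the service behind the load balancer (neutron-server) did not answer in time, rather than a problem in the playbook itself.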
2025-09-27 22:11:53.456858 | 2025-09-27 22:11:53.457026 | LOOP [stage-output : Copy files and folders to staging folder] 2025-09-27 22:11:53.496146 | 2025-09-27 22:11:53.496394 | TASK [stage-output : Make all log files readable] 2025-09-27 22:11:53.799300 | orchestrator | ok 2025-09-27 22:11:53.810297 | 2025-09-27 22:11:53.810441 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-09-27 22:11:53.855539 | orchestrator | skipping: Conditional result was False 2025-09-27 22:11:53.868579 | 2025-09-27 22:11:53.868729 | TASK [stage-output : Discover log files for compression] 2025-09-27 22:11:53.893527 | orchestrator | skipping: Conditional result was False 2025-09-27 22:11:53.908709 | 2025-09-27 22:11:53.908881 | LOOP [stage-output : Archive everything from logs] 2025-09-27 22:11:53.950938 | 2025-09-27 22:11:53.951085 | PLAY [Post cleanup play] 2025-09-27 22:11:53.960216 | 2025-09-27 22:11:53.960328 | TASK [Set cloud fact (Zuul deployment)] 2025-09-27 22:11:54.018562 | orchestrator | ok 2025-09-27 22:11:54.031592 | 2025-09-27 22:11:54.031725 | TASK [Set cloud fact (local deployment)] 2025-09-27 22:11:54.055875 | orchestrator | skipping: Conditional result was False 2025-09-27 22:11:54.069409 | 2025-09-27 22:11:54.069555 | TASK [Clean the cloud environment] 2025-09-27 22:11:54.700723 | orchestrator | 2025-09-27 22:11:54 - clean up servers 2025-09-27 22:11:55.483897 | orchestrator | 2025-09-27 22:11:55 - testbed-manager 2025-09-27 22:11:55.577982 | orchestrator | 2025-09-27 22:11:55 - testbed-node-2 2025-09-27 22:11:55.662075 | orchestrator | 2025-09-27 22:11:55 - testbed-node-3 2025-09-27 22:11:55.749199 | orchestrator | 2025-09-27 22:11:55 - testbed-node-4 2025-09-27 22:11:55.849362 | orchestrator | 2025-09-27 22:11:55 - testbed-node-5 2025-09-27 22:11:55.940121 | orchestrator | 2025-09-27 22:11:55 - testbed-node-1 2025-09-27 22:11:56.032550 | orchestrator | 2025-09-27 22:11:56 - testbed-node-0 2025-09-27 22:11:56.117422 | orchestrator | 2025-09-27 22:11:56 - clean up keypairs 2025-09-27 22:11:56.137372 | orchestrator | 2025-09-27 22:11:56 - testbed 2025-09-27 22:11:56.167536 | orchestrator | 2025-09-27 22:11:56 - wait for servers to be gone 2025-09-27 22:12:07.000707 | orchestrator | 2025-09-27 22:12:07 - clean up ports 2025-09-27 22:12:07.214075 | orchestrator | 2025-09-27 22:12:07 - 26bbbba3-53b6-41d6-b38a-f7c40ab68308 2025-09-27 22:12:07.472603 | orchestrator | 2025-09-27 22:12:07 - 297f1a01-2d26-49b1-8b73-2efdd04a7cc1 2025-09-27 22:12:07.723817 | orchestrator | 2025-09-27 22:12:07 - 7b43e01b-6892-4a60-b169-7312e9835641 2025-09-27 22:12:07.991347 | orchestrator | 2025-09-27 22:12:07 - 9c0161a3-a7d0-4713-847c-f1dac38a37e2 2025-09-27 22:12:08.235232 | orchestrator | 2025-09-27 22:12:08 - c9a1da41-c467-4574-90ee-69725f63a859 2025-09-27 22:12:08.482147 | orchestrator | 2025-09-27 22:12:08 - da5e3151-a9b9-4cc0-863b-948b7bfe5cf3 2025-09-27 22:12:08.956662 | orchestrator | 2025-09-27 22:12:08 - edb3e73f-a101-4fdb-9a4b-055df060b2c2 2025-09-27 22:12:09.196583 | orchestrator | 2025-09-27 22:12:09 - clean up volumes 2025-09-27 22:12:09.332620 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-2-node-base 2025-09-27 22:12:09.375172 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-5-node-base 2025-09-27 22:12:09.423612 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-manager-base 2025-09-27 22:12:09.466852 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-1-node-base 2025-09-27 22:12:09.510126 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-0-node-base 
2025-09-27 22:12:09.559256 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-3-node-base 2025-09-27 22:12:09.604991 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-5-node-5 2025-09-27 22:12:09.655866 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-7-node-4 2025-09-27 22:12:09.703585 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-6-node-3 2025-09-27 22:12:09.748062 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-4-node-base 2025-09-27 22:12:09.790857 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-2-node-5 2025-09-27 22:12:09.836979 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-3-node-3 2025-09-27 22:12:09.880156 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-8-node-5 2025-09-27 22:12:09.922407 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-0-node-3 2025-09-27 22:12:09.965884 | orchestrator | 2025-09-27 22:12:09 - testbed-volume-1-node-4 2025-09-27 22:12:10.008293 | orchestrator | 2025-09-27 22:12:10 - testbed-volume-4-node-4 2025-09-27 22:12:10.051603 | orchestrator | 2025-09-27 22:12:10 - disconnect routers 2025-09-27 22:12:10.169146 | orchestrator | 2025-09-27 22:12:10 - testbed 2025-09-27 22:12:11.682830 | orchestrator | 2025-09-27 22:12:11 - clean up subnets 2025-09-27 22:12:11.722897 | orchestrator | 2025-09-27 22:12:11 - subnet-testbed-management 2025-09-27 22:12:11.892408 | orchestrator | 2025-09-27 22:12:11 - clean up networks 2025-09-27 22:12:12.073024 | orchestrator | 2025-09-27 22:12:12 - net-testbed-management 2025-09-27 22:12:12.345579 | orchestrator | 2025-09-27 22:12:12 - clean up security groups 2025-09-27 22:12:12.385970 | orchestrator | 2025-09-27 22:12:12 - testbed-management 2025-09-27 22:12:12.511273 | orchestrator | 2025-09-27 22:12:12 - testbed-node 2025-09-27 22:12:12.623412 | orchestrator | 2025-09-27 22:12:12 - clean up floating ips 2025-09-27 22:12:12.656834 | orchestrator | 2025-09-27 22:12:12 - 81.163.193.199 2025-09-27 22:12:13.072135 | orchestrator | 2025-09-27 22:12:13 - clean up routers 2025-09-27 22:12:13.173445 | orchestrator | 2025-09-27 22:12:13 - testbed 2025-09-27 22:12:14.675418 | orchestrator | ok: Runtime: 0:00:20.248261 2025-09-27 22:12:14.678234 | 2025-09-27 22:12:14.678345 | PLAY RECAP 2025-09-27 22:12:14.678417 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-09-27 22:12:14.678453 | 2025-09-27 22:12:14.819157 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-09-27 22:12:14.821431 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-27 22:12:15.526574 | 2025-09-27 22:12:15.526726 | PLAY [Cleanup play] 2025-09-27 22:12:15.543081 | 2025-09-27 22:12:15.543210 | TASK [Set cloud fact (Zuul deployment)] 2025-09-27 22:12:15.597032 | orchestrator | ok 2025-09-27 22:12:15.605212 | 2025-09-27 22:12:15.605344 | TASK [Set cloud fact (local deployment)] 2025-09-27 22:12:15.629539 | orchestrator | skipping: Conditional result was False 2025-09-27 22:12:15.643727 | 2025-09-27 22:12:15.643902 | TASK [Clean the cloud environment] 2025-09-27 22:12:16.802142 | orchestrator | 2025-09-27 22:12:16 - clean up servers 2025-09-27 22:12:17.299288 | orchestrator | 2025-09-27 22:12:17 - clean up keypairs 2025-09-27 22:12:17.319535 | orchestrator | 2025-09-27 22:12:17 - wait for servers to be gone 2025-09-27 22:12:17.364383 | orchestrator | 2025-09-27 22:12:17 - clean up ports 2025-09-27 22:12:17.441951 | orchestrator | 2025-09-27 22:12:17 - clean up volumes 2025-09-27 22:12:17.528984 
| orchestrator | 2025-09-27 22:12:17 - disconnect routers 2025-09-27 22:12:17.561844 | orchestrator | 2025-09-27 22:12:17 - clean up subnets 2025-09-27 22:12:17.585158 | orchestrator | 2025-09-27 22:12:17 - clean up networks 2025-09-27 22:12:17.711817 | orchestrator | 2025-09-27 22:12:17 - clean up security groups 2025-09-27 22:12:17.750720 | orchestrator | 2025-09-27 22:12:17 - clean up floating ips 2025-09-27 22:12:17.778694 | orchestrator | 2025-09-27 22:12:17 - clean up routers 2025-09-27 22:12:18.181262 | orchestrator | ok: Runtime: 0:00:01.396746 2025-09-27 22:12:18.185268 | 2025-09-27 22:12:18.185430 | PLAY RECAP 2025-09-27 22:12:18.185551 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-09-27 22:12:18.185612 | 2025-09-27 22:12:18.310004 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-09-27 22:12:18.312423 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-27 22:12:19.025400 | 2025-09-27 22:12:19.025557 | PLAY [Base post-fetch] 2025-09-27 22:12:19.040919 | 2025-09-27 22:12:19.041052 | TASK [fetch-output : Set log path for multiple nodes] 2025-09-27 22:12:19.097498 | orchestrator | skipping: Conditional result was False 2025-09-27 22:12:19.104417 | 2025-09-27 22:12:19.104558 | TASK [fetch-output : Set log path for single node] 2025-09-27 22:12:19.154362 | orchestrator | ok 2025-09-27 22:12:19.165151 | 2025-09-27 22:12:19.165322 | LOOP [fetch-output : Ensure local output dirs] 2025-09-27 22:12:19.654424 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/work/logs" 2025-09-27 22:12:19.927504 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/work/artifacts" 2025-09-27 22:12:20.233470 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cfb964a163214dfcab0d7f04ee6fb101/work/docs" 2025-09-27 22:12:20.285199 | 2025-09-27 22:12:20.285589 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-09-27 22:12:21.235869 | orchestrator | changed: .d..t...... ./ 2025-09-27 22:12:21.236231 | orchestrator | changed: All items complete 2025-09-27 22:12:21.236299 | 2025-09-27 22:12:21.988248 | orchestrator | changed: .d..t...... ./ 2025-09-27 22:12:22.748068 | orchestrator | changed: .d..t...... 
./ 2025-09-27 22:12:22.772337 | 2025-09-27 22:12:22.772480 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-09-27 22:12:22.801466 | orchestrator | skipping: Conditional result was False 2025-09-27 22:12:22.806266 | orchestrator | skipping: Conditional result was False 2025-09-27 22:12:22.824717 | 2025-09-27 22:12:22.824881 | PLAY RECAP 2025-09-27 22:12:22.824957 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-09-27 22:12:22.824995 | 2025-09-27 22:12:22.944992 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-09-27 22:12:22.947329 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-27 22:12:23.679703 | 2025-09-27 22:12:23.679876 | PLAY [Base post] 2025-09-27 22:12:23.694243 | 2025-09-27 22:12:23.694375 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-09-27 22:12:24.750522 | orchestrator | changed 2025-09-27 22:12:24.761831 | 2025-09-27 22:12:24.761971 | PLAY RECAP 2025-09-27 22:12:24.762050 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-09-27 22:12:24.762127 | 2025-09-27 22:12:24.881991 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-09-27 22:12:24.883744 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-09-27 22:12:25.650357 | 2025-09-27 22:12:25.650518 | PLAY [Base post-logs] 2025-09-27 22:12:25.660950 | 2025-09-27 22:12:25.661074 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-09-27 22:12:26.114493 | localhost | changed 2025-09-27 22:12:26.136679 | 2025-09-27 22:12:26.136884 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-09-27 22:12:26.164186 | localhost | ok 2025-09-27 22:12:26.168239 | 2025-09-27 22:12:26.168351 | TASK [Set zuul-log-path fact] 2025-09-27 22:12:26.183445 | localhost | ok 2025-09-27 22:12:26.192241 | 2025-09-27 22:12:26.192344 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-27 22:12:26.216716 | localhost | ok 2025-09-27 22:12:26.219664 | 2025-09-27 22:12:26.219761 | TASK [upload-logs : Create log directories] 2025-09-27 22:12:26.707202 | localhost | changed 2025-09-27 22:12:26.712069 | 2025-09-27 22:12:26.712238 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-27 22:12:27.217434 | localhost -> localhost | ok: Runtime: 0:00:00.006640 2025-09-27 22:12:27.224999 | 2025-09-27 22:12:27.225163 | TASK [upload-logs : Upload logs to log server] 2025-09-27 22:12:27.782302 | localhost | Output suppressed because no_log was given 2025-09-27 22:12:27.786330 | 2025-09-27 22:12:27.786508 | LOOP [upload-logs : Compress console log and json output] 2025-09-27 22:12:27.849103 | localhost | skipping: Conditional result was False 2025-09-27 22:12:27.854054 | localhost | skipping: Conditional result was False 2025-09-27 22:12:27.868118 | 2025-09-27 22:12:27.868344 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-27 22:12:27.926953 | localhost | skipping: Conditional result was False 2025-09-27 22:12:27.927527 | 2025-09-27 22:12:27.929949 | localhost | skipping: Conditional result was False 2025-09-27 22:12:27.944554 | 2025-09-27 22:12:27.944778 | LOOP [upload-logs : Upload console log and json output]